I think that the “most” in the sentence “most philosophers and AI people do think that neural networks can be conscious if they run the right algorithm” is an overstatement, though I do not know to what extent.
It is probably an overstatement. At least among philosophers in the 2020 PhilPapers survey, the most relevant questions put it at a large but sub-majority position: 52% accept or lean towards physicalism (probably an upper bound); 54% say uploading = death; and only 39% “Accept or lean towards: future AI systems [can be conscious]”. So it would be hard to claim that ‘most philosophers’ in that survey would endorse an artificial neural network of the appropriate scale/algorithm being conscious.