I could be mistaken with this nitpick but isn't there a unit mismatch in "...just 20 watts—the same amount of electricity that powers two LED lightbulbs for 24 hours..."?
Just 20 watts, the same amount of electricity that powers 2 LED lightbulbs for 24 hours, one nanosecond, or twelve-thousand years.
There is indeed; watts aren't energy, and it's a common enough mistake that Technology Connections made a pretty good 52-minute video about it the other month [1].
[1]: https://www.youtube.com/watch?v=OOK5xkFijPc
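To spell out the power/energy distinction, here is a minimal sketch in Python (the 10 W-per-bulb figure is my own assumption, not from the article):

```python
# Power vs. energy, as raised in the thread above.
# Assumption (not from the article): a typical LED bulb draws about 10 W.

BRAIN_POWER_W = 20          # power attributed to the brain (watts)
LED_BULB_POWER_W = 10       # assumed power draw of one LED bulb (watts)
NUM_BULBS = 2

# A power comparison needs no time interval at all:
print(BRAIN_POWER_W == NUM_BULBS * LED_BULB_POWER_W)   # True: 20 W vs 2 x 10 W

# Energy is power multiplied by time (watt-hours), so the "for 24 hours"
# only matters if both sides run for the same duration -- any duration works.
for hours in (24, 1, 1e-9 / 3600):                      # a day, an hour, a nanosecond
    brain_energy_wh = BRAIN_POWER_W * hours
    bulbs_energy_wh = NUM_BULBS * LED_BULB_POWER_W * hours
    print(hours, brain_energy_wh, bulbs_energy_wh)      # the two energies are always equal
```

The duration cancels out either way, which is exactly why the article's "for 24 hours" adds nothing to a watts-to-watts comparison.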
Surprising that the article was not reviewed carefully enough to ensure accurate use of basic physics concepts... from LANL!!!
Philosophical thought: if the aim of this field is to create an artificial human brain, then it would be fair to say that the more advanced the field becomes, the less difference there is between the artificial brain and a real brain. This raises two questions:
1) Is the ultimate form of this technology ethically distinguishable from a slave?
2) Is there an ethical difference between bioengineering an actual human brain for computing purposes, versus constructing a digital version that is functionally identical?
For most applications, we don’t want “functionally identical”. We do not want it to have desires and a will of its own, biological(-analogous) needs, a circadian rhythm, fatigue and a need for sleep, mood changes and emotional swings, pain, a sexual drive, a need for recognition and validation, and so on. So we don’t want to copy the neural and bodily correlates that give rise to those phenomena, which arguably are not essential to how the human brain manages to have the intelligence it has. That is likely to drastically change the ethics of it. We will have to learn more about how those things work in the brain to avoid the undesirables.
If we back away from philosophy and think like engineers, I think you're entirely right and the question should be moot. I can't help but think, though, that in spite of it all, the Elon Musks and Sam Altmans of the future will not be stopped from attempting to create something indistinguishable from flesh and blood.
I mean have you watched Westworld?
In my opinion, one of the best works of fiction exploring this is qntm's "Lena" - https://qntm.org/mmacevedo
Sorry, but no: I think it overemphasizes the parent over the resultant progeny for no reason, and as such I think the story is limited in its vision and treatment of the subject.
Please say more
Please bear with me, I read it a long time ago.
I don’t think there’s any distinction between sentience resulting from organic or digital processes. All organic brains are subject to some manner of stochasticity which determines emergent behavioral properties, and I think the same will be true of digital or hybrid brains.
So if you clone me in digital form, it’s not me anymore—it’s not even my twin, it’s something which was inspired from me, but it’s not me. This is now a distinct individual because of the random processes which govern behavior or personality etc., a different person, so to speak. So I never appreciated why MC felt any attachment or responsibility towards his images, other than perhaps the kindness you’d exhibit towards other persons.
The images, or persons as I’d like to think of them, in the story were shown as sentient. But sentience is only one part of consciousness, and the images in the story Lena seem incapable of self-determination. Or maybe they’re some equivalent of a stunted form of animal consciousness, not human consciousness. Human consciousness is assertive about its right to self-determine by nature of being an apex organism.
But even cows and sheep get mad and murderous when you’re unkind to them. Donkeys will lash out if you’re being a jerk. So I think two things: 1) simply creating an image of behaviors is not creating consciousness, and 2) human consciousness possesses a distinct quality of self-determination.
The main thing I’ve noticed about conscious beings is that they have a will to assert themselves, and animals or humans that don’t possess or demonstrate that quality to an appreciable degree are usually physiologically damaged (by malnutrition or trauma, say). I don’t expect consciousness born out of digital processes to be any different.
To 1) and 2), assuming a digital consciousness capable of self-awareness and introspection, I think the answer is clearly 'no'.
But:
> it would be fair to say that the more advanced the field becomes, the less difference there is between the artificial brain and a real brain.
I don't think it would be fair to say this. LLMs are certainly not worthy of ethical consideration. Consciousness needs to be demonstrable. Even if the synaptic structure of a digital brain approaches 1:1 similarity with a human brain, the program running on it does not deserve ethical consideration unless and until consciousness can be demonstrated as an emergent property.
We should start by disambiguating intelligence and qualia. The field is trying to create intelligence, and kind of assuming that qualia won't be created alongside it.
How would you go about disambiguating them? Isn't that literally the "hard problem of consciousness" [0]?
[0] https://en.wikipedia.org/wiki/Hard_problem_of_consciousness
"Qualia" is a meaningless term made up so that philosophers can keep publishing meaningless papers. It's completely unfalsifiable: there is no test you can even theoretically run to determine the existence or nonexistence of qualia. There's never a reason to concern yourself with it.
The test I use to determine that there exist qualia is “looking”. Now, whether there is a test I can do to confirm that anything(/anyone) other than me experiences any is another question. (I don’t see how there could be such a test, but perhaps I just don’t see it.)
So, probably not really falsifiable in the sense you are considering, yeah.
I don’t think that makes it meaningless, nor a worthless idea. It probably makes it not a scientific idea?
If you care about subjective experiences, it seems to make sense that you would then concern yourself with subjective experiences.
For the great lookup table Blockhead, whose memory banks take up a galaxy’s worth of space, storing a lookup table of responses for any possible partial conversation history with it, should we value not “hurting its feelings”? If not, why not? It responds just like a person in an online one-on-one chat would.
Is “Is this [points at something] a moral patient?” a question amenable to scientific study? It doesn’t seem like it to me. How would you falsify answers of “yes” or “no”? But, I refuse to reject the question as “meaningless”.
The term has some validity as a word for what I take to be the inner perception of processes within the brain. The qualia of a scent, for example, can be taken to refer to the inner processing of scent perception giving rise to a secondary perception of that processing (or other side effects of that processing, like evoking associated memories). I strongly suspect that that’s what’s actually going on when people talk about what it feels like to see red, and the like.
Except that philosophers can keep publishing meaningless papers regardless.
Drinking from the eliminativist hose, are we?
You can't be serious. Whatever one wishes to say about the framing, you cannot deny conscious experience. Materialism painted itself into this corner through its bad assumptions. Pretending it hasn't produced this problem for itself, that it doesn't exist, is just plain silly.
Time to show some intellectual integrity and revisit those assumptions.
I am certain this answer will change as generations pass. The current generations, us, will say that there is a difference. Once a generation of kids grows up with AI assistants/friends/partners/etc., they will have a different view. They will demand rights and protections for their AI.
Disagree. It would be like saying that the more advanced transportation becomes, the more like a horse it will be.
Shining-brass 25 ton, coal-powered, steam-driven autohorse! 8 legs! Tireless! Breathes fire!
*shower thought
3) can we use a dead person's brain, hook up wires to it and oxygen, why not
The burden of proof is to show that there is any real or substantive similarity between the two beyond some superficial comparisons and numbers. If you can't provide that, then you can't answer those questions meaningfully.
(Frankly, this is all a category mistake. Human minds possess intentionality. They possess semantic apprehension. Computers are, by definition, abstract mathematical models that are purely syntactic and formal and therefore stripped of semantic content and intentionality. That is exactly what allows computation to be 'physically realizable' or 'mechanized', whether the simulating implementation is mechanical or electrical or whatever. There's a good deal of ignorant and wishy-washy magical thinking in this space that seems to draw hastily from superficial associations like "both (modern) computers and brains involve electrical phenomena" or "computers (appear to) calculate, and so do human beings", and so on.)
I've been building a 'neuromorphic' kernel/bare-metal OS that operates on Mac hardware using APL primitives as its core layer. Time is considered another 'position', and the kernel itself is vector-oriented, using 4D addressing with a 32x32x32 'neural substrate'.
I am so ready and eager for a paradigm shift of hardware & software. I think in the future 'software' will disappear for most people, and they'll simply ask and receive.
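I'm only guessing at what "time as another 'position'" means here, but a toy sketch of 4D addressing over a 32x32x32 substrate might look like this in Python (the ring of 16 time slots and the function name are made up for illustration, not taken from the actual APL kernel):

```python
# A hypothetical sketch of treating time as just another address axis over a
# 32x32x32 substrate -- my own guess at the idea, not the commenter's kernel.

SIDE = 32        # substrate is 32 x 32 x 32 cells
T_SLOTS = 16     # assumed number of time slots kept in a ring

def flat_index(x: int, y: int, z: int, t: int) -> int:
    """Map a 4D address (x, y, z, t) onto a flat buffer; the time axis wraps,
    so states older than T_SLOTS steps are overwritten."""
    assert 0 <= x < SIDE and 0 <= y < SIDE and 0 <= z < SIDE
    t %= T_SLOTS
    return ((t * SIDE + z) * SIDE + y) * SIDE + x

# One float per cell per time slot.
substrate = [0.0] * (SIDE ** 3 * T_SLOTS)

# Write a value at spatial cell (1, 2, 3) for time step 5, then read it back.
substrate[flat_index(1, 2, 3, 5)] = 0.75
print(substrate[flat_index(1, 2, 3, 5)])   # 0.75
```

Folding time into the address space like this is just one plausible reading; the real design presumably does something more interesting at the APL-primitive level.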
I'd love to read more about this. Do you have a blog?
And still no mention of Numenta… I’ve always felt it’s an underrated company, built on an even more underrated theory of intelligence
I want them to succeed but it's been two decades already. Maybe they should have started with a less challenging problem to grow the company?
They will be right on time when the first Mill CPU arrives!
They pivoted to regular deep learning when Jeff stepped away from the company several years ago. It does not appear they're doing much of brain modeling these days. Last publication was 3 years ago.
Neuromorphic computation has been hyped up for ~20 years by now. So far it has dramatically underperformed, at least vis-a-vis the hype.
The article does not distinguish between training and inference. Google Edge TPUs (https://coral.ai/products/) are each capable of performing 4 trillion operations per second (4 TOPS) using 2 watts of power—that's 2 TOPS per watt. So inference already costs far less than the 20 watts the paper attributes to the brain. To be sure, LLM training is expensive, but so is raising a child for 20 years. Unlike the child, LLMs can share weights, and amortise the energy cost of training.
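For the record, the back-of-the-envelope arithmetic behind that comparison (figures from the comment above; equating TPU "operations" with whatever the brain does is of course a huge simplification):

```python
# Edge TPU figures as cited above: 4 TOPS at 2 W.
EDGE_TPU_TOPS = 4.0        # trillion operations per second
EDGE_TPU_WATTS = 2.0
BRAIN_WATTS = 20.0         # figure the paper attributes to the brain

tops_per_watt = EDGE_TPU_TOPS / EDGE_TPU_WATTS
print(tops_per_watt)                      # 2.0 TOPS per watt

# At the brain's 20 W power budget you could run ten such devices:
print(BRAIN_WATTS / EDGE_TPU_WATTS)       # 10 devices
print(tops_per_watt * BRAIN_WATTS)        # ~40 TOPS within a 20 W envelope
```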
Another core problem with neuromorphic computation is that we currently have no meaningful idea how the brain produces intelligence, so it seems to be a bit premature to claim we can copy this mechanism. Here is what the Nvidia Chief Scientist B. Dally (and one of the main developers of modern GPU architectures) says about the subject:
> I keep getting those calls from those people who claim they are doing neuromorphic computing and they claim there is something magical about it because it's the way that the brain works ... but it's truly more like building an airplane by putting feathers on it and flapping with the wings!
From the "Hardware for Deep Learning" HotChips 2023 keynote, at 21:28: https://www.youtube.com/watch?v=rsxCZAE8QNA
The whole talk is brilliant and worth watching.
Just searched against HN, seems this term is at least 8 years old
The term neuromorphic? It was coined in 1990: https://ieeexplore.ieee.org/abstract/document/58356
Once again, I am quite surprised by the sudden uptick of AI content on HN coming out of LANL. Does anyone know if it's just getting posted to HN and staying on the first page suddenly, or is this a change in strategy for the lab? Even so, I don't see the other NatLabs showing up like this.
Probably because they're hosting an exascale-class cluster with a bazillion GH200s. Also, they launched a new "National Security AI Office".
The primary pool of money for DOE labs is through a program called "Frontiers in Artificial Intelligence for Science, Security and Technology (FASST)," replacing the Exascale Computing Project. Compared to other labs, LANL historically does not have many dedicated ML/AI groups but they have recently spun up an entire branch to help secure as much of that FASST money as possible.
I am not sure why HN has mostly LANL posts. Otherwise, though, it is a combination of things. Machine learning applications for NATSec & fundamental research have become more important (see FASST, proposed last year), the current political environment makes AI funding and applications more secure and easier to chase, and some of this is work that has already been going on but is getting greater publicity for both of those reasons.
I imagine the mood at the national labs right now is pretty panicky. They will be looking to get involved with more real-world applications than they traditionally have been, and will also want to appear more engaged with trendy technologies.
memristors are back