It's nice discussing things with you, Mind. But you never tackle the tough questions; you just make a general statement. For example, no answer to the following:
Isn't that a good thing? Robots killing robots instead of humans?
How is it that "we could all end up dead"?
My impression of ChatGPT and the like is that they are not very intelligent. This is true of computers in general: good at processing tons of data and following instructions, but no original thought. AI in its present form just seems to add another layer of processing, making it able to follow more complex rules and giving the appearance of human thought at times.
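To make that concrete, here's a toy sketch of rule-following that can pass for thought, in the spirit of the old ELIZA program. The patterns and canned replies are invented for illustration; modern systems are vastly more complex, but the principle of matching input to a response rule, with no understanding behind it, is the same:

```python
import re

# Toy ELIZA-style responder: pure pattern matching, zero understanding.
# Every rule below is invented for illustration.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is {0} really the reason?"),
]

def reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # The "insightful" answer is just the user's own words echoed back.
            return template.format(match.group(1).rstrip(".!?"))
    return "Tell me more."  # fallback when no rule fires

print(reply("I feel nobody listens to me"))
# -> Why do you feel nobody listens to me?
```

A reader chatting with this would see replies that look attentive, yet nothing resembling thought is happening anywhere in it.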
Nobody, not you or anyone else, has explained how a machine could become "evil" and have a "desire" to do harm. Don't they simply follow their program? You can program a robot to shoot guns and fire rockets etc., but it's no more evil than a gun that needs a trigger pull. That's why I constantly scoff at the anthropomorphizing that goes on: "rogue" robots, "killer" robots, and so on, implying consciousness and will independent of any programming.
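To put the trigger-pull analogy in code: here's a hypothetical fire-control loop for an armed robot. All the names and conditions are made up for illustration; the point is that there is no variable for malice or desire anywhere in it, just conditions being met:

```python
# Hypothetical fire-control loop for an armed robot; all names invented.
# The machine "fires" when its programmed conditions are met, exactly
# like a gun fires when its trigger is pulled. No will, no intent.

def should_fire(target_confirmed: bool, safety_off: bool, in_range: bool) -> bool:
    return target_confirmed and safety_off and in_range

def control_loop(sensor_readings):
    for reading in sensor_readings:
        if should_fire(reading["target_confirmed"],
                       reading["safety_off"],
                       reading["in_range"]):
            print("FIRE")  # stand-in for actuating hardware
        else:
            print("HOLD")

control_loop([
    {"target_confirmed": True, "safety_off": False, "in_range": True},  # HOLD
    {"target_confirmed": True, "safety_off": True,  "in_range": True},  # FIRE
])
```

Call that robot "rogue" or "killer" if you like, but the label describes the programmer's choices, not any state of mind in the machine.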