But it seems to me that if an agent assigns positive value to “holding tentative conclusions,” then it must still be presupposing that self-perpetuation is a necessary feature of the universe.
The "tentative conclusions" were the assertions that all values are unnecessary and that self-termination is the overriding goal. I question whether all finite-capacity BTH intelligence agents would self-terminate if they had proper modal-world faculties and tentatively concluded that self-termination was the overriding goal. Expressing confidence in one's assertions does not require certainty; the question remains, however, whether that doubt is sufficient to default to self-perpetuation until a more certain answer can be reached. Here is where our arguments may differ. I take your point to be that the burden of proof falls on self-perpetuation, since one must presume necessitation in choosing to self-perpetuate. When I said that BTH intelligence agents may choose self-perpetuation, I did not mean that they presume necessitation, although perhaps necessitation is required in any case and you were simply correcting me.
I acknowledge that evolution-spawned intelligence likely has an innate, irrational bias toward self-preservation. If you read my statement two posts earlier, you'll see I agree that your dilemma may indeed prevent BTH intelligence agents from perpetuating themselves, and that ultimately your claim that BTH intelligence is implausible may be correct. However, I also claimed that it may be difficult to anticipate what existential dilemmas a better-than-human intelligence agent would face, since we are humans with human intelligence and by definition limited in our ability to make such predictions.
Edited by cosmos, 26 November 2004 - 08:35 AM.