  LongeCity
              Advocacy & Research for Unlimited Lifespans





Val's Nanotech discussion thread


466 replies to this topic

#31 valkyrie_ice

  • Topic Starter
  • Guest
  • 837 posts
  • 142
  • Location:Monteagle, TN

Posted 14 January 2010 - 05:41 AM

Again, two things worry me:

1) Wishful thinking - we need to work and prove things rather than just hope. I feel terror when I cling to wishful thoughts, and I really hope we can reverse entropy, gain more energy, and somehow fight gravity completely.
2) Evidence? What evidence? You mean speculations. If there were evidence, the mainstream would look different by now.



No, actual evidence. John Keely made a motor that apparently worked using the earth's magnetic field for power. Sadly, it seems even he didn't know how he managed to make it work. The demo worked quite well, but his theories as to why it worked devolved into mysticism, and he ended up discrediting himself.

Tesla also claimed to have made a motor that ran off the Earth's EMF. Wardenclyffe was supposed to be the huge unveiling of his invention, but his financial backing collapsed and he went bankrupt first.

And recently NASA ran an experiment in space to generate power via the Earth's magnetic field. It worked too well: so much current was generated that it melted the tether.

These show that the Earth itself could in theory provide all the electrical power we need. It's simply a matter of refining the means to harvest it.


And now, back to our regularly scheduled Nano News!

From H+ and Michael Anissimov:

http://hplusmagazine...es-100-accuracy

In a 2009 article in Nature Nanotechnology, Dr. Seeman shared the results of experiments performed by his lab, along with collaborators at Nanjing University in China, in which scientists built a two-armed nanorobotic device with the ability to place specific atoms and molecules where scientists want them. The device was approximately 150 x 50 x 8 nanometers in size — over a million could fit in a single red blood cell. Using robust error-correction mechanisms, the device can place DNA molecules with 100% accuracy. Earlier trials had yielded only 60-80% accuracy.

The nanorobotic arm is built out of DNA origami: large strands of DNA gently encouraged to fold in precise ways by interaction with a few hundred short DNA strands. The products, around 100 nanometers in diameter, are eight times larger and three times more complex than what could be built with a simple crystalline DNA array, vastly expanding the space of possible structures. Other nanoscale structures and machines built by Dr. Seeman and his collaborators include a nanoscale walking biped, truncated DNA octahedra, and sequence-dependent molecular switch arrays. Dr. Seeman has exploited structural features of DNA thought to be used in genetic recombination to operate his nanoscale devices, tapping into the very processes underlying all life.

The advances in DNA nanotechnology keep coming, and many observers are wondering if this will be the path that leads us to the next Industrial Revolution. Only time — and many more experiments — will tell.
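The 60-80% to 100% jump attributed to error correction is worth a quick illustration. The article doesn't describe the lab's actual mechanism, but as a sketch, any scheme that can verify a placement and retry on failure drives the error rate down geometrically (the `placement_yield` helper below is hypothetical):

```python
# Illustrative only: the article does not detail the lab's error-correction
# scheme. This sketch shows how verify-and-retry rounds can lift a 60-80%
# single-attempt placement yield toward ~100%.

def placement_yield(p_single: float, attempts: int) -> float:
    """Probability of at least one correct, verified placement in
    `attempts` independent tries, each with success probability p_single."""
    return 1.0 - (1.0 - p_single) ** attempts

for p in (0.6, 0.8):
    yields = [round(placement_yield(p, k), 4) for k in (1, 2, 3, 5)]
    print(f"p={p}: {yields}")
```

With only a 60% single-try yield, five verified retries already push the failure rate down near 1%.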


One small placement for DNA nanoarms... One GIANT LEAP for mechanosynthesis.

Drexlerian Nanotech is looking closer all the time. XD

#32 Luna

  • Guest, F@H
  • 2,528 posts
  • 66
  • Location:Israel

Posted 14 January 2010 - 06:43 AM

Yeah, but the earth won't be here forever (edit: unless we prevent its demise) :D We need an energy source that will :)

Edited by Luna, 14 January 2010 - 06:43 AM.


#33 valkyrie_ice

  • Topic Starter
  • Guest
  • 837 posts
  • 142
  • Location:Monteagle, TN

Posted 14 January 2010 - 09:43 AM

Yeah, but the earth won't be here forever (edit: unless we prevent its demise) :D We need an energy source that will :)



Considering a lot of the Earth's power comes from interactions with the Sun's EMF, which in turn likely draws its power from the galaxy's EMF... well, I could go on, but Electric Universe theory is virulently attacked by cosmologists who refuse to recognize electrical activity in space, despite the fact that Voyager just passed through the plasma double layer and emerged into an ionized, current-carrying cloud of hydrogen gas. What can I say? The evidence keeps stacking up, and sooner or later they will have to accept it over their pretty little numbers.

sponsored ad

  • Advert

#34 valkyrie_ice

  • Topic Starter
  • Guest
  • 837 posts
  • 142
  • Location:Monteagle, TN

Posted 21 January 2010 - 04:38 AM

And back on GRAPHENE.

http://nextbigfuture...developing.html

The UK National Physical Laboratory is part of a European collaborative research project. They have brought the world a step closer to producing a new material on which future nanotechnology could be based. Researchers across Europe, including NPL, have demonstrated how an incredible material, graphene, could hold the key to the future of high-speed electronics, such as micro-chips and touchscreen technology.

A paper published in Nature Nanotechnology explains how researchers have, for the first time, produced graphene to a size and quality where it can be practically developed and successfully measured its electrical characteristics. These significant breakthroughs overcome two of the biggest barriers to scaling up the technology.

This project saw researchers, for the first time, produce and successfully operate a large number of electronic devices from a sizable area of graphene layers (approximately 50 mm^2). [Other researchers have created 4-inch wafers of graphene.]


Translation. Breakthrough number one: they can make graphene sheets of sizes comparable to current silicon wafers, and can in theory create sheets of graphene of any size.

Which means? Added to the fact that they have found a way to replace copper directly with graphene, and can now make graphene in sufficient quantity and quality, graphene processors may be only a few years away, dramatically increasing the speed of processors and dropping the cost per transistor by an order of magnitude. It could also mean a similar order-of-magnitude jump in operations per second, as graphene seems likely to run at double or triple the speed of current chips while using less power and producing less heat. It's possible Moore's law is about to become a double exponential curve. 2012 to 2015 could be an amazing time for computer advances.
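A quick back-of-envelope on those two claims (the figures below are the assumptions stated above, not measured data):

```python
# Compounding the two assumed gains from the text: ~10x cheaper
# transistors and a 2-3x clock bump multiply into a 20-30x jump in
# operations per second per dollar.
transistor_gain = 10            # assumed order-of-magnitude cost drop
for clock_gain in (2, 3):
    ops_per_dollar_gain = transistor_gain * clock_gain
    print(f"{clock_gain}x clock -> ~{ops_per_dollar_gain}x ops per dollar")
```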

However, being able to create large sheets of graphene could lead to numerous other advances as well. For one thing, it is essentially a diamond one atom thick. Can we say monomolecular knife? Or how about woven strips of graphene in a plastic matrix, creating ultratough, ultralight materials for the skins of airplanes or the bodies of cars, not to mention hundreds of other potential construction uses? Electronics is not the only potential winner from this advance.

Breakthrough number two is a little harder to quantify, but equally significant. It involves measuring the electrical properties of graphene compared to current electronic devices. The electrical properties of a semiconductor are the most important knowledge in electronics design; they are like tensile strength in engineering. They allow the prediction of electron behavior in a semiconductor, which is what makes electronics possible at all.

The breakdown is that graphene allows more precise predictions than silicon. Circuits can be designed far more stably than current chips, with less crosstalk and noise, requiring less redundancy and error-correction control. That means more of the chip dedicated to computing, faster throughput, and more protection from stray EMF signals.

All of which equals: smaller, faster, more. And possibly lighter, tougher, and more efficient.

#35 valkyrie_ice

  • Topic Starter
  • Guest
  • 837 posts
  • 142
  • Location:Monteagle, TN

Posted 22 January 2010 - 09:30 PM

And today:

http://nextbigfuture...m-diameter.html

The Penn State EOC is a leading center for the synthesis of graphene materials and graphene-based devices. Using a process called silicon sublimation, EOC researchers David Snyder and Randy Cavalero thermally processed silicon carbide wafers in a high temperature furnace until the silicon migrated away from the surface, leaving behind a layer of carbon that formed into a one- to two-atom-thick film of graphene on the wafer surface. The EOC wafers were 100mm in diameter, the largest diameter commercially available for silicon carbide wafers, and exceeded the previous demonstration of 50mm.

According to EOC materials scientist Joshua Robinson, Penn State is currently fabricating field effect transistors on the 100 mm graphene wafers and will begin transistor performance testing in early 2010. A further goal is to improve the speed of electrons in graphene made from silicon carbide wafers to closer to the theoretical speed, approximately 100 times faster than silicon. That will require improvements in the material quality, says Robinson, but the technology is new and there is plenty of room for improvements in processing.

In addition to silicon sublimation, EOC researchers Joshua Robinson, Mark Fanton, Brian Weiland, Kathleen Trumbull and Michael LaBella are developing the synthesis and device fabrication of graphene on silicon as a means to achieve wafer diameters exceeding 200mm, a necessity for integrating graphene into the existing semiconductor industry. With the support of the Naval Surface Warfare Center in Crane, Ind., EOC researchers are initially focusing on graphene materials to improve the transistor performance in various radio frequency (RF) applications.

With its remarkable physical, chemical and structural properties, graphene is being studied worldwide for electronics, displays, solar cells, sensors, and hydrogen storage. Graphene has the potential to enable terahertz computing, at processor speeds 100 to 1,000 times faster than silicon. For a material that was first isolated only five years ago, graphene is getting off to a fast start.


Did you get that?

That's right. Val was wrong about how fast graphene can be.

2x to 3x is just the initial speed.

We could potentially see 100x to 1000x speeds.

GHz? We don need no stinkin GHz...

We's talking TERAHERTZ.

Can we say Moore's law might be about to be broken, because it's WAY TOO SLOW?

#36 Reno

  • Guest
  • 584 posts
  • 37
  • Location:Somewhere

Posted 23 January 2010 - 07:54 AM

You know, it wouldn't surprise me if the chip companies purposefully dragged their progress just to milk the consumer over the next few decades. An instant jump from 4ghz to 1000ghz would change the industry overnight.

Edited by bobscrachy, 23 January 2010 - 07:55 AM.


#37 valkyrie_ice

  • Topic Starter
  • Guest
  • 837 posts
  • 142
  • Location:Monteagle, TN

Posted 23 January 2010 - 01:12 PM

You know, it wouldn't surprise me if the chip companies purposefully dragged their progress just to milk the consumer over the next few decades. An instant jump from 4ghz to 1000ghz would change the industry overnight.


It's going to take several years to ramp up to those speeds; new chip designs will be needed and refinements made to the process, but I doubt that manufacturers will drag their feet. All it would take is one company jumping the gun to grab market share to make collusion impossible. It's a kill-or-be-killed market, and laggards will be eaten, after all.

No, I think GHz Wars 2 is about to begin. And there are more players this go-round than just AMD and Intel. Not to mention computers are no longer the primary market. Smartphones are.

#38 niner

  • Guest
  • 16,276 posts
  • 1,999
  • Location:Philadelphia

Posted 23 January 2010 - 07:37 PM

No, I think GHz Wars 2 is about to begin. And there are more players this go-round than just AMD and Intel. Not to mention computers are no longer the primary market. Smartphones are.

Hmm. A terahertz machine running the X86 instruction set. Windows will really boogie. I gotta say, though, that until there are apps that can profitably deploy all that speed, the demand might not be there. There's a point where more pixels, polygons, and frames won't make a difference to the user experience. We aren't quite there yet, but I think what we need from here is a lot less than three orders of magnitude improvement in speed. All the speed in the world won't help AI if the algorithms are crappy, although it does open the door to some interesting brain simulations that don't require the largest supercomputer in the solar system. Assuming such speeds materialize, and I think they will sooner or later, what do they portend? A new world of fantastic, heretofore unknown apps? Or a new world of insanely bad, ultra-inefficient code that needs a THz machine just to send email? (Windows 10?) Probably some of both.

#39 valkyrie_ice

  • Topic Starter
  • Guest
  • 837 posts
  • 142
  • Location:Monteagle, TN

Posted 23 January 2010 - 10:05 PM

No, I think GHz Wars 2 is about to begin. And there are more players this go-round than just AMD and Intel. Not to mention computers are no longer the primary market. Smartphones are.

Hmm. A terahertz machine running the X86 instruction set. Windows will really boogie. I gotta say, though, that until there are apps that can profitably deploy all that speed, the demand might not be there. There's a point where more pixels, polygons, and frames won't make a difference to the user experience. We aren't quite there yet, but I think what we need from here is a lot less than three orders of magnitude improvement in speed. All the speed in the world won't help AI if the algorithms are crappy, although it does open the door to some interesting brain simulations that don't require the largest supercomputer in the solar system. Assuming such speeds materialize, and I think they will sooner or later, what do they portend? A new world of fantastic, heretofore unknown apps? Or a new world of insanely bad, ultra-inefficient code that needs a THz machine just to send email? (Windows 10?) Probably some of both.


No.

Think of the insane amount of computing power needed for a smartphone running photorealistic virtual reality programs: insane numbers of polygons rendered in real time.

Apps? Apps don't even need the power we have now. I'm looking at graphics processing power, augmented reality overlays, full-environment motion capture, etc.

#40 Reno

  • Guest
  • 584 posts
  • 37
  • Location:Somewhere

Posted 24 January 2010 - 05:18 AM

Back in the day I used to think apps were actual applications, or the new cool hip way of saying the nerdy word "programs." Nowadays people say "apps" in reference to little smartphone programs. It used to be that the smallest program could have the power to shuffle around terabytes of data. Maybe I'm too much of an old-school computer guy to know all the new lingo.

#41 niner

  • Guest
  • 16,276 posts
  • 1,999
  • Location:Philadelphia

Posted 24 January 2010 - 06:00 AM


No.

The insane amount of computing power needed for a smartphone running photorealistic virtual reality programs. Insane amounts of polygons rendered in real time.

Apps? Apps don't even need the power we have now. I'm looking at graphics processing power, Augmented reality overlays, full environment motion capture, etc.

We don't need insane numbers of polygons; we just need enough polygons, enough pixels, and enough frames. But yeah, I guess if we want to run Pixar's greatest renderer on a virtual world at 4k x 4k and 120fps, then we will need a lot of power. We actually have Tflop graphics processors now, so is it safe to assume that they will be Pflop when implemented in graphene? I still contend that there is a point at which the graphics do not need to get any better because they exceed the capacity of our perceptual apparatus. I also still suspect that more power will result in even less-efficient code, in addition to great new things.

Bobscrachy is correct that the term "apps" has been absconded with by smartphones. I was using it in the older form of "programs", generally oriented toward a single user, like a PC app, but also including "real" programs with names that are all-uppercase acronyms, that run under real operating systems. Ultimately, they're all just programs, a bunch of machine code.

#42 valkyrie_ice

  • Topic Starter
  • Guest
  • 837 posts
  • 142
  • Location:Monteagle, TN

Posted 24 January 2010 - 07:01 AM


We don't need insane numbers of polygons; we just need enough polygons, enough pixels, and enough frames. But yeah, I guess if we want to run Pixar's greatest renderer on a virtual world at 4k x 4k and 120fps, then we will need a lot of power. We actually have Tflop graphics processors now, so is it safe to assume that they will be Pflop when implemented in graphene? I still contend that there is a point at which the graphics do not need to get any better because they exceed the capacity of our perceptual apparatus. I also still suspect that more power will result in even less-efficient code, in addition to great new things.

Bobscrachy is correct that the term "apps" has been absconded with by smartphones. I was using it in the older form of "programs", generally oriented toward a single user, like a PC app, but also including "real" programs with names that are all-uppercase acronyms, that run under real operating systems. Ultimately, they're all just programs, a bunch of machine code.



Hummm. It seems I'm still not being clear enough.

Okay, let's try this. Imagine a mall with 3,000 people. Each person has a VRphone which is generating a rendered mirror of the environment: everything within each individual's view, including all real items as well as all virtual items associated with every person in view, while simultaneously tracking each individual's motion down to millimeter precision via a combination of lidar mounted in the lenses, networked with the lidars of all nearby phones and of the various surveillance cameras, to enable that precise motion tracking and provide the 3D mesh which lets it all blend.

In a VR game, yes, once a certain level of complexity is reached, additional polygons are pointless. However, a VR overlay requires a lot more computing power than a simple VR game. It must not only track you, it has to track everyone and everything around you; render not only your own personal virtual objects but all virtual objects used by every person in the environment; differentiate between virtual and real objects; erase non-virtual items which lie BEHIND virtual items; and erase in real time all movements of those people who choose to exist as an avatar while overlaying their avatar on the same space, etc.

VR of that level of complexity would require dozens of current chips, and might still not handle all of it in real time; but that same dozen chips running at terahertz speeds, as multiple virtual processes, could act as hundreds or more current processors, each one dedicated to a single process.

Run a hundred core processor at terahertz speeds and you have 100,000 virtual processors of current power available for use. From one chip.
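That arithmetic, spelled out as a sketch (assuming today's baseline core runs at roughly 1 GHz):

```python
# Sketch of the claim above: a 100-core chip clocked at 1 THz, carved
# into time-sliced virtual processors each equivalent to a ~1 GHz core.
cores = 100
terahertz_clock = 1e12   # assumed 1 THz clock
baseline_clock = 1e9     # assumed ~1 GHz "current" core
virtual_processors = int(cores * terahertz_clock / baseline_clock)
print(virtual_processors)
```

This ignores memory bandwidth and scheduling overhead, which in practice would claw a lot of that back.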

The code might indeed end up being inefficient, but we don't have to advance our programming knowledge to use that kind of power. Simply modify most programs to use distributed computing processes.

But that isn't the only thing to consider. I've already written to Ray Kurzweil to ask how he thinks such a massive leap in computing ability could affect his predictions. If they can indeed reach terahertz chips, we're essentially looking at reaching brain levels of computing power years to decades ahead of his curve. I have to wonder how that will affect the time to singularity.

Edited by valkyrie_ice, 24 January 2010 - 07:03 AM.


#43 ben951

  • Guest
  • 111 posts
  • 15
  • Location:France

Posted 24 January 2010 - 02:09 PM

If they can indeed reach terahertz chips, we're essentially looking at reaching brain levels of computing power years to decades ahead of his curve. I have to wonder how that will affect the time to singularity.

If I'm not mistaken, IBM's supercomputer Sequoia will achieve 20 petaflops in 2012, which is what Kurzweil estimates to be the power of the human brain.
http://www.guardian....ter-ibm-sequoia

So we will reach brain levels of computing power pretty soon even without graphene, at least for supercomputers.

Edited by ben951, 24 January 2010 - 02:18 PM.


#44 niner

  • Guest
  • 16,276 posts
  • 1,999
  • Location:Philadelphia

Posted 25 January 2010 - 06:30 AM

If they can indeed reach terahertz chips, we're essentially looking at reaching brain levels of computing power years to decades ahead of his curve. I have to wonder how that will affect the time to singularity.

If I'm not mistaken, IBM's supercomputer Sequoia will achieve 20 petaflops in 2012, which is what Kurzweil estimates to be the power of the human brain.
http://www.guardian....ter-ibm-sequoia

So we will reach brain levels of computing power pretty soon even without graphene, at least for supercomputers.

But it doesn't translate that way. The 20 pflops from Sequoia is reached under rather artificial conditions of a perfectly parallelized and pipelined calculation, while the brain's 20 pflops comes from gazillions of really slow, task-specific processors that are highly networked. Sequoia couldn't be smart like a human, at least not without some really great software that doesn't exist yet, but a human couldn't solve 20 quadrillion multiplies a second, either.
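The "perfectly parallelized" caveat is essentially Amdahl's law. A short sketch (processor count illustrative) shows how even a tiny serial fraction caps the effective speedup far below a machine's nominal peak:

```python
# Amdahl's law: speedup = 1 / (s + (1 - s) / n), where s is the serial
# fraction of the workload and n the processor count. Peak-flops figures
# implicitly assume s = 0.

def amdahl_speedup(serial_fraction: float, n_processors: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

n = 1_000_000  # illustrative, roughly supercomputer-scale thread count
for s in (0.0, 0.001, 0.01, 0.1):
    print(f"serial fraction {s}: ~{amdahl_speedup(s, n):,.0f}x speedup")
```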

#45 valkyrie_ice

  • Topic Starter
  • Guest
  • 837 posts
  • 142
  • Location:Monteagle, TN

Posted 25 January 2010 - 06:50 AM

If I'm not mistaken, IBM's supercomputer Sequoia will achieve 20 petaflops in 2012, which is what Kurzweil estimates to be the power of the human brain.
http://www.guardian....ter-ibm-sequoia

So we will reach brain levels of computing power pretty soon even without graphene, at least for supercomputers.

But it doesn't translate in that way. The 20 pflops from sequoia is reached under rather artificial conditions of a perfectly parallelized and pipelined calculation, while the brain's 20 pflops comes from gazillions of really slow task-specific processors that are highly networked. Sequoia couldn't be smart like a human, at least not without some really great software that doesn't exist yet, but a human couldn't solve 20 quadrillion multiplies a second, either.


You both might find this intriguing: http://www.physorg.c...s183373216.html

This organic transistor, based on pentacene and gold nanoparticles and known as a NOMFET (Nanoparticle Organic Memory Field-Effect Transistor), has opened the way to new generations of neuro-inspired computers, capable of responding in a manner similar to the nervous system. The study is published in the 22 January 2010 issue of the journal Advanced Functional Materials.

In the development of new information processing strategies, one approach consists in mimicking the way biological systems such as neuron networks operate to produce electronic circuits with new features. In the nervous system, a synapse is the junction between two neurons, enabling the transmission of electric messages from one neuron to another and the adaptation of the message as a function of the nature of the incoming signal (plasticity). For example, if the synapse receives very closely packed pulses of incoming signals, it will transmit a more intense action potential. Conversely, if the pulses are spaced farther apart, the action potential will be weaker.

It is this plasticity that the researchers have succeeded in mimicking with the NOMFET.

A transistor, the basic building block of an electronic circuit, can be used as a simple switch - it can then transmit, or not, a signal - or instead offer numerous functionalities (amplification, modulation, encoding, etc.).

The innovation of the NOMFET resides in the original combination of an organic transistor and gold nanoparticles. These encapsulated nanoparticles, fixed in the channel of the transistor and coated with pentacene, have a memory effect that allows them to mimic the way a synapse works during the transmission of action potentials between two neurons. This property therefore makes the electronic component capable of evolving as a function of the system in which it is placed. Its performance is comparable to the seven CMOS transistors (at least) that have been needed until now to mimic this plasticity.

The devices produced have been optimized to nanometric sizes in order to be able to integrate them on a large scale. Neuro-inspired computers produced using this technology are capable of functions comparable to those of the human brain.

Unlike silicon computers, widely used in high performance computing, neuro-inspired computers can resolve much more complex problems, such as visual recognition.
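The pulse-spacing behavior described above can be mimicked with a toy leaky-integrator model. To be clear, this is an illustration of the plasticity concept, not the NOMFET's actual charge dynamics:

```python
import math

def response_after_pulses(n_pulses: int, interval: float, tau: float = 1.0) -> float:
    """Accumulated response after n unit pulses spaced `interval` apart,
    decaying exponentially with time constant `tau` between pulses."""
    state = 0.0
    for _ in range(n_pulses):
        state = state * math.exp(-interval / tau) + 1.0
    return state

# Closely packed pulses build a strong response; spread-out pulses decay
# away between arrivals, leaving a weak one.
print(response_after_pulses(5, interval=0.1))
print(response_after_pulses(5, interval=3.0))
```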



#46 Luna

  • Guest, F@H
  • 2,528 posts
  • 66
  • Location:Israel

Posted 25 January 2010 - 08:59 AM

http://www.physorg.c...s183544566.html

Saw this on Shannon's Facebook :p Thanks, Shannon!

#47 valkyrie_ice

  • Topic Starter
  • Guest
  • 837 posts
  • 142
  • Location:Monteagle, TN

Posted 25 January 2010 - 09:31 AM

http://www.physorg.c...s183544566.html

Saw this on Shannon's facebook :p Thanks Shannon!


*facepalms*

So they are FINALLY planning to use a magnetic Z-pinch in an effort to make fusion work, but astrophysicists still want to deny that electrical phenomena occur in space.

*slams head against table a few times*

#48 ben951

  • Guest
  • 111 posts
  • 15
  • Location:France

Posted 25 January 2010 - 10:17 AM

But it doesn't translate in that way. The 20 pflops from sequoia is reached under rather artificial conditions of a perfectly parallelized and pipelined calculation, while the brain's 20 pflops comes from gazillions of really slow task-specific processors that are highly networked. Sequoia couldn't be smart like a human, at least not without some really great software that doesn't exist yet,

Yes, of course; I'm not saying that Sequoia will be smart like a human without the proper software.
Even Kurzweil thinks that the software knowledge will lag 20 years behind, but we will have more than enough hardware power by then, even if 20 petaflops is a low estimate.

but a human couldn't solve 20 quadrillion multiplies a second, either.

But aren't we solving billions of equations at the subconscious level when we think or move?

Edited by ben951, 25 January 2010 - 10:18 AM.


#49 valkyrie_ice

  • Topic Starter
  • Guest
  • 837 posts
  • 142
  • Location:Monteagle, TN

Posted 27 January 2010 - 05:05 AM

And further evidence that "printing" is likely to become the dominant manufacturing paradigm.

http://nextbigfuture...ing-all-of.html

DARPA is investigating dividing the R&D and design aspects of product development from the manufacturing, basically outsourcing all actual manufacturing, from prototyping to final product, to dedicated manufacturing companies that produce products for multiple, competing clients. Electronics already does this, with companies such as AMD and Nvidia concentrating on innovation and using outsourced manufacturing.

The need to create manufacturing processes which are robust enough for high volume yet flexible enough to alter production quickly will likely force innovation in 3D printing technology. HP is already selling a "professional" grade 3D printer for half what others charge (about $10,000), while Makerbot is producing one for $1,000 for the DIY crowd. As RepRap and other open-source projects progress, it's going to become more and more likely that mainline industries will follow suit, especially as the database of freely available objects continues to grow. Eventually even manufacturers are likely to outsource the construction part to home units. Design firms will design the device, manufacturers will create the "source code" for making it, and the end user will use a 3D printer to produce it after purchase.

K. Eric Drexler disagrees with me to an extent. He believes that high-throughput manufacturing using dedicated assembly lines will continue to dominate, and in part he's right: printing isn't going to affect the manufacture of simple things like hammers. But for anything requiring complex assembly, or which is improved upon frequently (a range printable electronics is going to widen considerably), printing is likely to become dominant. Rebuilding a dedicated assembly line for every generation of new hardware is going to become impossible as generation times grow shorter and shorter. As open-source and home-based manufacturing becomes more common, there will likely be a division between the manufacture of highly standardized modular or stand-alone components and uniquely customizable final products.

It's going to be interesting, and great for rebuilding the economy, but it won't do much for manufacturing jobs.

#50 niner

  • Guest
  • 16,276 posts
  • 1,999
  • Location:Philadelphia

Posted 27 January 2010 - 05:49 AM

but they still want to deny that electrical phenomena occur in space in Astrophysics.

What is this electric universe of which you speak? Electrical phenomena have to occur in space, otherwise spacecraft and satellites wouldn't work. I presume that you are talking about something that happens over a large scale. Does it take into account that the energy of an electrostatic interaction falls off as a function of the inverse of (distance squared)? That kind of takes it off the table for really long range effects.

#51 valkyrie_ice

  • Topic Starter
  • Guest
  • 837 posts
  • 142
  • Location:Monteagle, TN

Posted 27 January 2010 - 07:37 AM

but they still want to deny that electrical phenomena occur in space in Astrophysics.

What is this electric universe of which you speak? Electrical phenomena have to occur in space, otherwise spacecraft and satellites wouldn't work. I presume that you are talking about something that happens over a large scale. Does it take into account that the energy of an electrostatic interaction falls off as a function of the inverse of (distance squared)? That kind of takes it off the table for really long range effects.



humm... why yes... the inverse of distance squared... like... GRAVITY? XPPPPPPP

However, the electromagnetic force is vastly stronger than gravity (roughly 10^36 times, comparing the Coulomb and gravitational forces between two protons), which is why a small magnet has no problem picking things up against the gravity of the entire Earth.
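For anyone who wants to check that strength ratio themselves, here's a quick back-of-envelope calculation. Note the usual caveat: the ~10^36 figure specifically compares the Coulomb and gravitational attraction between two protons; pick different particles and you get a different number. The constants below are standard published values.

```python
# Ratio of Coulomb force to gravitational force between two protons.
# Both forces fall off as 1/r^2, so the r^2 terms cancel and the
# ratio is independent of distance.
k = 8.988e9        # Coulomb constant, N m^2 / C^2
e = 1.602e-19      # elementary charge, C
G = 6.674e-11      # gravitational constant, N m^2 / kg^2
m_p = 1.673e-27    # proton mass, kg

ratio = (k * e**2) / (G * m_p**2)
print(f"{ratio:.2e}")  # ~1.2e+36
```

The distance-independence is the key point: no matter how far apart the two charges are, electrostatic attraction between them beats gravity by the same 36 orders of magnitude.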

So... basically, Astrophysics is ignoring the strongest force in favor of the weakest. Magnetism is a result of the flow of electrical current. For a plasma to have a "magnetic field" it must first have an electrical current flowing through it, however faintly. You cannot have magnetic fields WITHOUT that current.

Electric universe theory is basically acknowledging that electrostatic forces are the primary forces in the formation of the universe. It completely eliminates the entire "missing matter" question that plagues gravity only models. Dark matter, dark flow, and dark energy are figments of mathematical conjecture created to fill in for the missing matter which acknowledging electrical forces in space accounts for.

http://www.electricu...fo/Introduction (links to lots of info)
http://members.cox.n...ott-Aug2007.pdf (for those wanting to check the math)
http://www.electric-...rg/indexOLD.htm (a short summary of Electric Universe Theory for the average reader)
http://www.plasma-un...ma-Universe.com ( another links site for further reading)

Simply put, as an electronics technician with electronic engineering training, the evidence I've examined for the Electric Universe theory has made a far more substantial case for its veracity than all the astrophysics I spent years studying before coming across it. I spent 39 years believing the standard gravity-centered model and followed all of its developments my entire life. It took two days of reading EU theory and the case made for it to convince me which had better evidence, and which was asking me to accept things on faith. It's only been a year since I was first exposed to EU, and I continue to find more evidence supporting it. The latest news from Voyager is just one more piece: it's traveling through the sun's double-layer plasma sheath into a cloud of hydrogen gas that astrophysics says SHOULD NOT EXIST. NASA explains the cloud as having a magnetic field that maintains its cohesion, once again ignoring that electric currents and magnetic fields are inseparable.

Yes, EU is controversial. Yes, it would mean a HUGE shakeup in cosmic theory. Yes, it is howled down by every true believer in Einsteinian physics and relativity. Yes, much of the writing on the theory is openly hostile to the current astrophysics community. Yes, some of it even sounds paranoid about being "suppressed".

But the evidence can be separated from the vitriolic language, and it keeps mounting. Gravity is only a part of the picture.

And it is the interaction of the Sun's electrical fields with the Earth that drives global climate, Niner. CO2 is just a tiny part of it. Every planet in the solar system is experiencing warming. Forget every other fossil-fuel counterargument; EU alone indicates AGW is hype and propaganda.

As I have stated. Belief is meaningless. Evidence is what counts. And mathematical suppositions that continually fail when put to the real world test are not PROOF. Which is why the LHC is just going to be yet another trillion dollar failure.

Edited by valkyrie_ice, 27 January 2010 - 08:03 AM.


#52 Kutta

  • Guest, F@H
  • 94 posts
  • 0

Posted 27 January 2010 - 08:28 PM

As I have stated. Belief is meaningless. Evidence is what counts. And mathematical suppositions that continually fail when put to the real world test are not PROOF. Which is why the LHC is just going to be yet another trillion dollar failure.

Non sequitur?

#53 valkyrie_ice

  • Topic Starter
  • Guest
  • 837 posts
  • 142
  • Location:Monteagle, TN

Posted 27 January 2010 - 11:05 PM

As I have stated. Belief is meaningless. Evidence is what counts. And mathematical suppositions that continually fail when put to the real world test are not PROOF. Which is why the LHC is just going to be yet another trillion dollar failure.

Non sequitur?



Not really. The LHC is based on the same mathematical figments I was talking about. It's hunting for mathematical abstractions, not reality.

But yeah. I was a bit distracted when finishing the post. So it is a little disjointed.

Okay.... a LOT disjointed.

Edited by valkyrie_ice, 27 January 2010 - 11:06 PM.


#54 valkyrie_ice

  • Topic Starter
  • Guest
  • 837 posts
  • 142
  • Location:Monteagle, TN

Posted 29 January 2010 - 04:02 AM

And back to our regularly scheduled technology talk!

First, from K. Eric: http://metamodern.co...locks/#comments

This post is prompted by a set of interrelated advances in chemistry that hold great promise for advancing the art of atomically precise fabrication. In this post, I’ll describe an emerging class of modular synthesis methods for making a diverse set of small, complex molecular building blocks.

The road to complex self-assembled nanosystems starts with stable molecular building blocks, and the more choices, the better. Self-assembly and the folding of foldamers are similar processes: They work when parts fit together well, and in just one way. Having building blocks to choose from at the design stage will typically make possible a better fit, resulting in a denser, more stable structure.


More advances toward Drexlerian-style nanotech... discussed by the nanoguru himself.

followed by: http://metamodern.co...tional-control/

Which discusses building control systems for nanomachines.


Now in the world of Terahertz computing: http://nextbigfuture...r-graphene.html

Graphene is considered to be a promising candidate for future nanoelectronics due to its exceptional electronic properties. Unfortunately, the graphene field-effect transistors (FETs) cannot be turned off effectively due to the absence of a band gap, leading to an on/off current ratio typically around 5 in top-gated graphene FETs. On the other hand, theoretical investigations and optical measurements suggest that a band gap up to a few hundred millielectronvolts can be created by the perpendicular E-field in bilayer graphenes. Although previous carrier transport measurements in bilayer graphene transistors did indicate a gate-induced insulating state at temperatures below 1 K, the electrical (or transport) band gap was estimated to be a few millielectronvolts, and the room temperature on/off current ratio in bilayer graphene FETs remains similar to those in single-layer graphene FETs. Here, for the first time, we report an on/off current ratio of around 100 and 2000 at room temperature and 20 K, respectively, in our dual-gate bilayer graphene FETs. We also measured an electrical band gap of >130 and 80 meV at average electric displacements of 2.2 and 1.3 V nm−1, respectively. This demonstration reveals the great potential of bilayer graphene in applications such as digital electronics, pseudospintronics, terahertz technology, and infrared nanophotonics.


FETs are a primary computer component. What they've done is widen the band gap (the energy barrier electrons must cross, which determines how completely the FET can be switched "off") to a level high enough to use in binary logic circuits, and done so at room temperature rather than only when supercooled.
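As a rough sanity check on the reported on/off ratio of ~100, assume (a simplification on my part, not something from the paper) that the off-state leakage is thermally activated across the gap. The achievable on/off ratio then scales roughly like exp(Eg/kT):

```python
import math

# Rough estimate: if off-state leakage is thermally activated across
# the band gap Eg, the on/off current ratio scales like exp(Eg / kT).
k_B = 8.617e-5     # Boltzmann constant, eV/K
Eg = 0.130         # reported electrical band gap, eV
T = 300.0          # room temperature, K

ratio = math.exp(Eg / (k_B * T))
print(f"{ratio:.0f}")
```

This gives a ratio on the order of 150 at room temperature, the same order of magnitude as the ~100 the group measured, which is why a gap of at least a few hundred meV matters for room-temperature logic.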

For those of you who are not electronics people, a FET is a three-terminal switching device: a small control voltage on its gate switches a larger current on and off. Think of a canal with a dam. The control voltage operates the dam: raise it and water flows, lower it and it doesn't. By using the outflow from one canal as the control for a second canal's dam, you create a simple logic gate. What this advance has done is make the dam tall enough to actually hold back a significant amount of water, rather than a tiny dam that overflows easily.
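If code is easier to follow than canals, here's a toy model of the same idea. The function names are purely my own illustration, treating each FET as an ideal voltage-controlled switch:

```python
# Toy model of FETs as ideal voltage-controlled switches (the "dam"
# analogy): the gate input decides whether current flows source->drain.

def nfet(gate: bool) -> bool:
    """n-type FET: conducts when the gate is driven high."""
    return gate

def inverter(a: bool) -> bool:
    # If the FET conducts, it pulls the output down to ground (0);
    # otherwise the output stays pulled up to the supply (1).
    return not nfet(a)

def nand(a: bool, b: bool) -> bool:
    # Two FETs in series to ground: the output is pulled low only
    # when BOTH conduct.
    return not (nfet(a) and nfet(b))

print(inverter(True))    # False
print(nand(True, False)) # True
print(nand(True, True))  # False
```

Chaining the output of one such gate into the input of the next is exactly the "outflow of one canal controls the next dam" trick, and a NAND gate alone is enough to build any binary logic circuit.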

And it's yet one more major step toward a terahertz computer, coming within days of the last one. At this rate of advancement we may have prototypes by year's end.


And now, on the electronics-printing front. You will remember my saying that printing touch-screen displays would happen soon?

http://nextbigfuture...uch-screen.html

Soon = Now.

Get ready for displays on your cereal box, peoples, cuz itsa comin!

#55 valkyrie_ice

  • Topic Starter
  • Guest
  • 837 posts
  • 142
  • Location:Monteagle, TN

Posted 28 February 2010 - 08:13 PM

And more info on the advance of graphene electronics:

http://nextbigfuture...bbon-based.html

arxiv - Towards Graphene Nanoribbon-based Electronics (9 page pdf)

The successful fabrication of single layer graphene has greatly stimulated the progress of the research on graphene. In this article, focusing on the basic electronic and transport properties of graphene nanoribbons (GNRs), we review the recent progress of experimental fabrication of GNRs, and the theoretical and experimental investigations of physical properties and device applications of GNRs. We also briefly discuss the research efforts on the spin polarization of GNRs in relation to the edge states.

Experimental Fabrication of Graphene Nanoribbons

* graphene can be patterned by traditional e-beam lithography technique into nanoribbons with various widths ranging from 20 to 500nm
* 10-nm-wide nanoribbon has been etched via scanning tunnelling microscope (STM) lithography.
* with optimal lithographic parameters, it is possible to cut GNRs with suitably regular edges, which constitutes a great advance towards the reproducibility of GNR-based devices
* chemically derived GNRs with various widths ranging from 50 nm to sub-10 nm. These GNRs have atomic-scale ultrasmooth edges


A major step towards all-carbon processors. Nanoribbons are extremely conductive, and unlike copper, graphene nanoribbons channel virtually all electrons along the ribbon, allowing almost none to escape. Like a fiber-optic cable, and unlike copper, two GNRs laid side by side with only a 1 nm gap will self-insulate rather than crosstalk the way copper does. As noted earlier in this thread, a continuous sheet of graphene laid over a surface that applies stresses to it can even confine electrons to the stressed pathways, allowing circuits to be made out of a single unbroken sheet.


and on the VR front:

http://nextbigfuture...ad-mounted.html

The xSight is a professional headmounted display that delivers a previously-unattainable combination of panoramic field of view, high resolution and light weight. It is an ideal choice for a variety of training, simulation and design applications.

• 123° Panoramic field of view provides superb situational awareness, active peripheral vision and enhanced realism
• High resolution: accepts multiple video resolutions (1680x1050 pixels per eye recommended) for greater information content and high pixel density.
• A lightweight (400g)

Read More...


That's right: high-definition, wraparound (i.e. FULL peripheral vision) displays. A bit bulky right now, yes, but the major features are that wide-angle coverage and that high resolution. Spread across a 123-degree field (roughly the average human's horizontal viewing arc), 1680x1050 per eye is still well short of the pixel density at which individual pixels become indistinguishable to the human eye, but it's a major step in that direction. Give this a couple of years of development with printed electronics, flexible displays, and ultracap batteries and we will have true VR lenses, and the current "3D" technology will be as dead as Betamax.
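For anyone curious how to judge an HMD spec like this, the useful figure is pixels per degree: horizontal pixels divided by horizontal field of view, compared against the ~60 pixels/degree commonly cited as the limit of 20/20 visual acuity (a rule-of-thumb benchmark, not a hard number):

```python
# Angular resolution of the xSight spec: horizontal pixels per degree
# of field of view, versus the ~60 px/deg rule of thumb for 20/20 acuity.
h_pixels = 1680     # horizontal pixels per eye
fov_deg = 123       # horizontal field of view, degrees

ppd = h_pixels / fov_deg
print(f"{ppd:.1f} pixels/degree")  # ~13.7
```

This is why wide field of view and "retinal" sharpness pull against each other: the same panel spread over a wider arc gives fewer pixels per degree, so wraparound displays need far higher total resolution than narrow ones.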

Edited by valkyrie_ice, 28 February 2010 - 08:14 PM.


#56 Luna

  • Guest, F@H
  • 2,528 posts
  • 66
  • Location:Israel

Posted 28 February 2010 - 10:15 PM

How does VR help you live forever?
Sure it could be fun but enjoying your life more isn't something I feel very lacking

#57 valkyrie_ice

  • Topic Starter
  • Guest
  • 837 posts
  • 142
  • Location:Monteagle, TN

Posted 01 March 2010 - 08:58 PM

How does VR help you live forever?
Sure it could be fun but enjoying your life more isn't something I feel very lacking


VR doesn't directly impact longevity, but it does provide better tools for the research, as well as a social platform that lets the public explore the longevity meme. Be it Highlander-style "immortals" or the endless vampire wannabes, the longevity meme is slowly reaching more and more people. It's slowly infecting our media and our games, and has been in our literature for centuries. Twenty years ago, the number of fantasy shows dealing with long-lived individuals was NIL. Now it seems you can't turn around without finding a new vampire movie or show, or one in which immortals of one type or another exist. And all of them push the same idea: immortality is a curse only if everyone around you isn't immortal. None of these immortals want to die; what they want is for those around them not to die. It's a progressive framing of the longevity meme, one which is slowly making the idea of immortality something more and more people contemplate, especially the younger generations, i.e. those under 40 or so.

VR itself isn't an end; it's merely a means to one. It's a means to force humanity to realize that the future is not going to be the same as the past, and to drive public demand for research in fields which will directly impact the quest for immortality, such as bioengineering and nanotech.

In another way, I also see it as a solution to one of the biggest obstacles in the way of effective immortality: the general contempt for human life the human race displays. Our species evolved in a competitive environment, and we had to be able to treat those not of our tribe as "not human", but that is becoming harder and harder as the internet spreads to more and more people and we gradually realize that people are people. VR is going to force us to directly confront the xenophobia at the heart of that willingness to treat those "not us" as "not human", and while I do foresee turmoil, I think as we grow accustomed to the new "virtuality" we're going to abandon many of those old prejudices, especially as it becomes easier and easier to alter the human body alongside VR. And as more and more people come to view the world as a single unified whole, the value of human life, and of preserving human life, is going to increase, making longevity research that much more of a priority.

Don't get me wrong, I wish longevity research were the #1 priority now, but I simply don't see it becoming so until we've overcome many of the other problems that cause us to so easily devalue any life but our own.

#58 Luna

  • Guest, F@H
  • 2,528 posts
  • 66
  • Location:Israel

Posted 02 March 2010 - 07:56 AM

How can someone at your age be so confident when people at their 20s find it hopeless?

#59 hotamali

  • Guest
  • 49 posts
  • 2

Posted 02 March 2010 - 11:38 PM

How can someone at your age be so confident when people at their 20s find it hopeless?


Because information technologies will converge with life extension to make the whole process go faster. Science doesn't work independently from itself. Or at least that's the idea.

#60 valkyrie_ice

  • Topic Starter
  • Guest
  • 837 posts
  • 142
  • Location:Monteagle, TN

Posted 03 March 2010 - 11:27 AM

How can someone at your age be so confident when people at their 20s find it hopeless?


That's simple life experience, Luna, dear. I've watched this happening for 30 years. As someone who has watched technology speed up every year, who started way back on a Timex Sinclair 1000 and has been involved in computing since the birth of the internet, I've seen the acceleration happening.

It's not a matter of faith. It's a matter of seeing it first hand, and intense observation as it's happening. It's one thing to read about it, or to be told about it. It's entirely different when you've lived through it.

Nothing exists in a vacuum. Technology doesn't develop in isolation, and every advance leads to further advances. Our entire world as it exists today grew out of the technological developments of the Apollo program, which not only produced the microelectronics we use daily but advanced medicine, created an institutionalized research industry, and taught several generations how to think scientifically. Most people don't realize how much we have advanced in the last 40 years, because they can't step back and see the big picture. Even many experienced futurists in many fields lack that ability to step back and view EVERYTHING. NOTHING exists in a vacuum, and one has to see how things interconnect. Science, politics, and human nature are all parts of the whole, and you can't ignore the bad parts in favor of the good. You can't pick and choose which data to use and which to ignore. You can't assume only positive uses for a technology, or that bad people won't get hold of it. One man's evil is another man's good, so you have to understand how even the bad can lead to positive outcomes. And you have to be well aware that not everything is as straightforward as it might seem, because there are always unintended consequences; the better you can anticipate those, the more likely you are to truly see the big picture.

I am a complete and utter cynic, Luna. I see the human race as few others I know do: as a completely amoral, self-serving, and self-deluded species, and at the same time I see how that very nature will lead us to become a far better species as we progress over the next few decades. Our flaws are the very things driving us towards a future in which we will overcome those flaws, because those flaws work together with technology in ways which make certain developments highly likely, and which ensure that the kind of coordinated resistance which could derail certain technological trends never develops. I can look at the current economic problems and understand that they were an inevitable consequence of our technological advances, and that they HAD to occur, because if they hadn't, the next steps towards the future couldn't have occurred. I knew this kind of turmoil was coming 20 years ago, as computers and eventually robots replaced more and more of the workforce. There was no way around it. There still is no way around it, and the job situation is NOT going to improve more than moderately, because many jobs are becoming obsolete and more and more are going to join them. And that is going to FORCE social changes, regardless of all attempts to maintain the status quo. The next few years are probably going to get crazy. But there is a light on the other side as we finish transitioning from an industrial age of competing nations and corporations into a true worldwide society.

Yes, we're in a dark and scary forest. But you're too young to remember how much darker and scarier that forest was in the past, when we were all still reeling from the world wars, the dangers of MAD, and the sheer insanity of the Cold War. You can look at Iraq and Afghanistan and see a few thousand casualties as a massive loss of life in a major war, where I see a tiny, far-too-expensive-to-fight, bleeding wound of a war, insignificant against the wars fought last century, in which thousands of troops could die in a single DAY. The combined casualties on all sides of both the Iraq and Afghan wars are fewer than the number of troops lost taking Normandy Beach, or during the Vietnam War. It's the fact that people can SEE this war, live on TV more or less, which makes it seem so much greater than it really is. The same with petty dictators like the Iranian leader, or the North Vietnamese leader. They are barely dictators at all compared to Hitler or Mussolini, or even Noriega, but they loom large simply because they are all we have left to focus our fears on. Our world is moving out of the age of war and battles, but at the same time those wars are better televised and far more in the public eye than ever before.

The same with politics. The corruption in Washington has existed for centuries, but it's never before been so in the public eye, and it seems so much worse than in the past solely because we have become so much more aware of it. Like the age of war, it's an age in its last gasp, just like nations and corporations and the divide between rich and poor. It may take a decade or two, but these things are in the process of dying, even though right now they seem stronger than ever. Unless you can see the greater whole and understand how interconnected things are, you could easily believe things are hopeless. You see them tightening their grip; I see the last desperate grasp of a dying entity about to go under for the final time, clutching at anything to stay alive. They are going to fail, though they may cling to life for another decade or so before the inevitable occurs. Yet even in their desperate struggle for life, they will be bringing about the technologies that will kill them.

The Singularity is not a mystical, magical cure-all, Luna. It's simply a point beyond which we can no longer predict with any certainty. What lies beyond is not something I'm concerned about. But I can see exactly how we will get there, via many, many different paths, and I can see that it's not really the Singularity that matters. Who we will be by the time we reach the Singularity is not who we are now. Every step of the way between now and then is a step in humanity's evolution. We're going to have to face our demons of the id, our fears and prejudices, our phobias and our flaws, and find solutions to them before we will ever be able to create a "savior".

I do believe there are ETs out there; I'm even willing to contemplate that they may have helped us somewhat in developing technologies, but it's pretty clear they are not going to come down in their UFOs and "save" us. No mystical "mind field" is going to occur. No "Point Zero" is going to spark a change in human consciousness. It's going to be a bloody, painful, dangerous road. As we always have, we'll walk to the future over the corpses of the dead. I'll do what little I personally can to reduce that number, but there is no easy way forward, no path which is not paved in blood; merely paths on which the body count is lower than others. Sadly, I think we are going to go right up the middle: not as bloody as it could be, but far bloodier than it HAS to be.

But I can see the end of the bloodstained path ahead, Luna. I can see it drawing closer with every new technological advance. I have watched it draw closer day by bloody day for 30 years. It was so very very far away when I first started watching, but it is getting closer faster with every year that passes. But I also can see that the way to get to the end of bloodstains is going to be hard, difficult, and full of suffering.

But everything has a price, Luna. What you have to ask yourself is if the reward is worth the price. But even more important, you have to truly understand the reward. Immortality is just the tip of an iceberg. It's just one of those pieces which together make up the whole. A world where hunger, disease, poverty, war, and death have all been conquered is also just the tip of the iceberg. It would take me years to detail the big picture I see. But yeah, I think we as a species will find the reward worth the price.

I had my years of hopelessness too, Luna. It's pretty typical for the 20-something crowd. I wish I could share with you the exact moment my old worldview shattered, the immense crash of falling glass I could feel around me as my mind processed the fact that I had suddenly awoken into a brand new reality and raced to put all the new pieces I could suddenly see into place. I wish I could show you exactly what went on in my mind as I, for the first time, managed to step back and see the forest instead of the trees. I wish I could share with you the moment I found hope again. But all I can really do is tell you that from that moment to now, nothing I have seen happen in the world has diminished that hope, and much has occurred to strengthen it.

It's a steep and hard climb up a bloodstained and slippery path to the top of the cliff beyond which the Singularity awaits, but we will make it, Luna. Human nature makes it inevitable. A lot of us will fall and die, but the human race itself is going to reach that plateau and leave its childhood behind. It's not going to be paradise, and we'll never overcome every problem, because we will always find new problems to solve; we're a problem-solving species, after all. But we will have left our current problems buried along the path, amid the gore and carnage of our primitive past, and moved on to the greater challenges of our unknowable future.



