writing chapter 5.5
and here’s the rest of kurt creating the kernel. last night i sifted thru the rest of the research, and did a bunch more, and now i have all sorts of cryostats and superconducting magnets and biological computers and concepts that make my head whirl. and in the middle of that, i have kurt’s voice coming thru, saying what he’d do with all this research, picking thru the theories and experimental instruments and saying which ones he’s interested in. so his voice runs thru the middle of all these still-in-quotes notes, mainly from wikipedia, tho a few physics sites stand out for information that i can understand, almost.
i seem to be mainly done with this process now. i read thru it all this morning, 14 pages of assembled notes and research, culled from 32 pages, culled from a 300-page reference document. now i will go thru and start to make my arguments and chart the flow of the chapter.
kurt builds a biological computer, using a drop of his blood for the substrate. and his blood happened to be rich in psilocybin and ayahuasca molecules at the time, so it was kurt in his shaman essence that formed the quantum kernel. as a parenthetical.
what does he show them? the quantum computer is like the third policeman, in a box so small you can’t see it etc. too small to be described. not quite hocus pocus, but when snake gets hold of it he’ll turn it into marketing mystique like dwave. he put it in a chip and hooked it to his iphone. stuck it in a bluetooth. is it a little lump under a sticker on the back of his phone? how did he miniaturize it? what kind of lab did he get hold of? does he go into ga tech’s lab? high reflector optical coating over the kernel. or a filter for communication with the classical device. what’s inside the kernel? cpu and ram? what other choices do we have?
“in a computer using the Harvard architecture, the CPU can both read an instruction and perform a data memory access at the same time, even without a cache.”
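the idea is easy to sketch. a toy python model (nothing here is a real chip, just the shape of the idea): because instruction memory and data memory are physically separate, a single cycle can fetch an instruction and touch data with no bus contention.

```python
# Toy model of the Harvard architecture: instruction memory (imem) and
# data memory (dmem) are separate, so one cycle can access both.
class HarvardCPU:
    def __init__(self, program, data):
        self.imem = list(program)   # instruction memory
        self.dmem = list(data)      # data memory
        self.pc = 0                 # program counter
        self.acc = 0                # accumulator

    def cycle(self):
        # In the same cycle we fetch an instruction AND access data --
        # possible without a cache because the memories are separate.
        op, addr = self.imem[self.pc]
        if op == "LOAD":
            self.acc = self.dmem[addr]
        elif op == "ADD":
            self.acc += self.dmem[addr]
        elif op == "STORE":
            self.dmem[addr] = self.acc
        self.pc += 1

cpu = HarvardCPU([("LOAD", 0), ("ADD", 1), ("STORE", 2)], [3, 4, 0])
for _ in range(3):
    cpu.cycle()
print(cpu.dmem[2])  # 3 + 4 = 7
```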
Today, transistors on integrated circuits have reached a size so small that it would take more than 2,000 of them stacked next to each other to equal the thickness of a human hair. The transistors on Intel’s latest chips are only 45 nanometers wide — the average human hair is about 100,000 nanometers thick.
Because electronics depend upon controlling the flow of electrons to work, issues like quantum tunneling create serious problems. These problems force electrical engineers to re-evaluate the way they design circuits. In some cases, shifting to different materials solves the issue. In others, finding a completely new way to build circuits might work.
The power of quantum computing, in an algorithmic sense, results from calculating with superpositions of states; all the states in the superposition are transformed simultaneously (quantum parallelism) and the effect increases exponentially with the dimension of the state space. The challenge in quantum algorithm design is to make measurements which enable this parallelism to be exploited; in general this is very difficult.
This algorithm embodies what seem to be the essential aspects of an efficient quantum algorithm: preparation of a superposed state, then application of unitary transformations in such a way as to take advantage of quantum parallelism and then concentrate the resulting global information into a single place, and finally an appropriate measurement.
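deutsch’s algorithm is the textbook miniature of that exact pattern — prepare a superposition, apply the oracle unitary to all inputs at once, interfere, measure — and it’s small enough to simulate in a few lines of numpy (a sketch of the standard algorithm, not anything from my research notes):

```python
import numpy as np

# State-vector simulation of Deutsch's algorithm: decide with ONE oracle
# call whether f: {0,1} -> {0,1} is constant or balanced.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)

def deutsch(f):
    # two qubits, basis ordering |x y>; start in |0>|1>
    state = np.kron([1, 0], [0, 1]).astype(float)
    state = np.kron(H, H) @ state          # superpose both registers
    # oracle U_f acts on every basis state at once: |x, y> -> |x, y XOR f(x)>
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    state = U @ state
    state = np.kron(H, I) @ state          # interfere the x register
    p1 = state[2] ** 2 + state[3] ** 2     # P(first qubit measures 1)
    return "balanced" if p1 > 0.5 else "constant"

print(deutsch(lambda x: 0))      # prints "constant"
print(deutsch(lambda x: x))      # prints "balanced"
```

the measurement at the end is the hard-won part: all the parallel evaluations are concentrated into a single qubit whose value answers the global question.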
There is much scope for further research in all of the areas that I have surveyed in this article. In language design, the use of complex quantum data structures has not been fully explored, nor has the development of high-level quantum control structures. There has been relatively less emphasis on formal semantics for the imperative languages than for the functional languages, and it would be useful to redress the balance. Denotational semantics of higher-order quantum functional computation is still not solved. The area of compiling quantum programming languages has received relatively little attention.
Roughly cube-shaped, and around 10 feet tall, they emit a rhythmic, high-pitched sound as supercooled gases circulate inside. Each of the machines has a door on the side and is mostly empty, with what looks like a ray gun descending from the ceiling, a widely spaced stack of five metal discs of decreasing size held together with cables, struts, and pipes plated with gold and copper. It is actually a cold gun: the structure is a chilly -452 °F (4 K) at the wide end and a few thousandths of a degree above absolute zero at its tip, where D-Wave’s inch-square chip can be found. Not even the deepest reaches of space are this cold, or so shielded from magnetic fields as this chip, which is etched at a plant in Silicon Valley from a niobium alloy that becomes superconducting at ultralow temperatures. D-Wave’s processors are also made up of elements that switch between 1 and 0, but they are loops of niobium alloy—there are 512 of them in the newest processor. These loops are known as qubits and can trap electrical current, which circles inside the loops either clockwise (signified by a 0) or counterclockwise (1). Smaller superconducting loops called couplers link the qubits so they can interact and even influence one another to flip between 1 and 0. Performing a calculation on D-Wave’s chip requires providing the raw material, in the form of the numbers to be fed into its hard-coded algorithm. It’s done by setting the qubits into a pattern of 1s and 0s, and fine-tuning how the couplers allow the qubits to interact. After a wait of less than a second, the qubits settle into new values that represent a lower state of energy for the processor, and reveal a potential solution to the original problem. That allows the system of qubits to explore every possible final configuration in an instant, before settling on the one that is simplest or very close to it. The relatively small number of qubits on the processor today means it can handle only tiny strings of data.
Using mathematical tricks to translate a problem into the right form to deal with those limitations, and reversing the process once D-Wave’s chip has given its answer, could cause significant slowdowns. D-Wave has engineers working on ways to automatically translate normal programming code into what its chip needs.
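d-wave does quantum annealing in hardware; the same settle-into-low-energy idea can be faked classically. a toy simulated-annealing sketch in python, with made-up coupler and bias values, just to show the shape of the computation — set the couplings, let the spins relax, read the answer:

```python
import math
import random

# Classical simulated-annealing sketch of the annealing idea: spins
# settle into a low-energy configuration that encodes the answer.
# Toy problem (values are invented): coupler J = -1 makes the two
# spins prefer to agree; the small bias on spin 1 breaks the tie.
J = {(0, 1): -1.0}          # coupler strengths
h = {0: 0.0, 1: 0.1}        # per-spin biases

def energy(s):
    e = sum(w * s[i] * s[j] for (i, j), w in J.items())
    e += sum(b * s[i] for i, b in h.items())
    return e

def anneal(steps=2000, seed=1):
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(2)]
    for t in range(steps):
        temp = max(0.01, 1.0 - t / steps)      # cooling schedule
        i = rng.randrange(2)
        flipped = s[:]
        flipped[i] *= -1
        dE = energy(flipped) - energy(s)
        # accept downhill moves always, uphill moves with Boltzmann odds
        if dE < 0 or rng.random() < math.exp(-dE / temp):
            s = flipped
    return s

print(anneal())  # the spins end up agreeing, in the low-energy state
```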
Many of the regular advances in computing power have come from connections on chips shrinking year after year, but with leading chip maker Intel currently working on making them just 14 nanometers across, there’s not much smaller things can get. “We’re living in the last 10 years of exponential growth of [classical] computing power, and alternatives to that will become more of interest.”
One approach deposits rhenium or niobium on a semiconductor surface and cools the system to near absolute zero so that it exhibits quantum behavior. As the Times reports, the method relies on standard microelectronics manufacturing tech, which could make quantum computers easier and cheaper to make. There are several competing methods for making the qubits, including laser-entangled ions, LED-powered entangled photons, and more.
Graphene is a substance made of pure carbon, with atoms arranged in a regular hexagonal pattern similar to graphite, but in a one-atom-thick sheet. It is very light, with a 1 square meter sheet weighing only 0.77 milligrams. Measurements have shown that graphene has a breaking strength 200 times greater than steel, with a tensile modulus (stiffness) of 1 TPa (150,000,000 psi).
Experimental results from transport measurements show that graphene has a remarkably high electron mobility at room temperature. The corresponding resistivity of the graphene sheet would be 10⁻⁶ Ω·cm. This is less than the resistivity of silver, the lowest-resistivity substance known at room temperature. Graphene is thought to be an ideal material for spintronics due to small spin-orbit interaction and near absence of nuclear magnetic moments in carbon. Electrical spin-current injection and detection in graphene was recently demonstrated up to room temperature. Density functional theory simulations predict that depositing certain adatoms on graphene can render it piezoelectrically responsive to an electric field applied in the vertical (i.e. out-of-plane) direction. This type of locally engineered piezoelectricity is similar in magnitude to that of bulk piezoelectric materials and makes graphene a candidate tool for control and sensing in nanoscale devices. Due to the extremely high surface-area-to-mass ratio of graphene, one potential application is in the conductive plates of ultracapacitors. It is believed that graphene could be used to produce ultracapacitors with a greater energy storage density than is currently available. Graphene’s high electrical conductivity and high optical transparency make it a candidate for transparent conducting electrodes, required for such applications as touchscreens, liquid crystal displays, organic photovoltaic cells, and organic light-emitting diodes. In particular, graphene’s mechanical strength and flexibility are advantageous compared to indium tin oxide, which is brittle, and graphene films may be deposited from solution over large areas. Large-area, continuous, transparent, and highly conducting few-layered graphene films were produced by chemical vapor deposition and used as anodes for application in photovoltaic devices. Organic light-emitting diodes (OLEDs) with graphene anodes have also been demonstrated.
The electronic and optical performance of devices based on graphene is shown to be similar to devices made with indium tin oxide. An all carbon-based device called a light-emitting electrochemical cell (LEC) was demonstrated with chemically derived graphene as the cathode and the conductive polymer PEDOT as the anode by Matyba et al. Unlike its predecessors, this device contains no metal, but only carbon-based electrodes. The use of graphene as the anode in LECs was also verified in the same publication. Graphene has the ideal properties to be an excellent component of integrated circuits. Graphene has a high carrier mobility, as well as low noise, allowing it to be used as the channel in a field-effect transistor. The issue is that single sheets of graphene are hard to produce, and even harder to make on top of an appropriate substrate. Researchers are looking into methods of transferring single graphene sheets from their source of origin (mechanical exfoliation on SiO2 / Si or thermal graphitization of a SiC surface) onto a target substrate of interest. According to a January 2010 report, graphene was epitaxially grown on SiC in a quantity and with quality suitable for mass production of integrated circuits (epitaxy refers to the deposition of a crystalline overlayer on a crystalline substrate, where the overlayer is in registry with the substrate; in other words, there must be one or more preferred orientations of the overlayer with respect to the substrate for the growth to be termed epitaxial). At high temperatures, the Quantum Hall effect could be measured in these samples. In November 2011, researchers at Cambridge University demonstrated the feasibility of ink-jet printing as a method for fabricating graphene devices. Several potential applications for graphene are under development, and many more have been proposed.
These include lightweight, thin, flexible, yet durable display screens, electric circuits, and solar cells, as well as various medical, chemical, and industrial processes enhanced or enabled by the use of new graphene materials.
Graphene nanoribbons (GNRs) are essentially single layers of graphene that are cut in a particular pattern to give them certain electrical properties. Depending on how the un-bonded edges are configured, they can be in either a zigzag or armchair configuration. Calculations based on tight binding predict that zigzag GNRs are always metallic while armchairs can be either metallic or semiconducting, depending on their width; more detailed calculations show that zigzag nanoribbons are also semiconducting and present spin-polarized edges. Their 2D structure, high electrical and thermal conductivity, and low noise also make GNRs a possible alternative to copper for integrated circuit interconnects. Some research is also being done to create quantum dots by changing the width of GNRs at select points along the ribbon, creating quantum confinement. Large quantities of width-controlled GNRs can be produced via the graphite nanotomy process. Due to its high electronic quality, graphene has also attracted the interest of technologists who see it as a way of constructing ballistic transistors. Graphene exhibits a pronounced response to perpendicular external electric fields, allowing one to build FETs (field-effect transistors); four different types of logic gates, each composed of a single graphene transistor, have been demonstrated. Massachusetts Institute of Technology researchers have also built an experimental graphene chip known as a frequency multiplier. It is capable of taking an incoming electrical signal of a certain frequency and producing an output signal that is a multiple of that frequency.
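the frequency multiplier is, at bottom, nonlinearity plus trigonometry: squaring a tone at frequency f produces a component at 2f, since sin²x = (1 − cos 2x)/2. a quick numpy check of the math (an illustration of the principle, nothing graphene-specific):

```python
import numpy as np

# Squaring a pure tone doubles its frequency -- the math behind any
# nonlinear frequency multiplier. Sample rate and frequency are arbitrary.
fs, f = 1024, 8                       # samples/sec and input frequency
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * f * t)
doubled = signal ** 2                 # toy stand-in for the device's nonlinearity

spectrum = np.abs(np.fft.rfft(doubled))
spectrum[0] = 0                       # ignore the DC offset from squaring
print(int(np.argmax(spectrum)))       # prints 16, i.e. 2 * f
```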
When any two devices need to talk to each other, they have to agree on a number of points before the conversation can begin. The first point of agreement is physical: Will they talk over wires, or through some form of wireless signals? If they use wires, how many are required — one, two, eight, 25? How much data will be sent at a time? For instance, serial ports send data 1 bit at a time, while parallel ports send several bits at once. How will they speak to each other? All of the parties in an electronic discussion need to know what the bits mean and whether the message they receive is the same message that was sent. This means developing a set of commands and responses known as a protocol. Bluetooth can connect up to eight devices simultaneously. With all of those devices in the same 10-meter (32-foot) radius, you might think they’d interfere with one another, but it’s unlikely. Bluetooth uses a technique called spread-spectrum frequency hopping, in which a device uses 79 individual, randomly chosen frequencies within a designated range, changing from one to another on a regular basis. In the case of Bluetooth, the transmitters change frequencies 1,600 times every second, meaning that more devices can make full use of a limited slice of the radio spectrum. When Bluetooth-capable devices come within range of one another, an electronic conversation takes place to determine whether they have data to share or whether one needs to control the other. The user doesn’t have to press a button or give a command — the electronic conversation happens automatically. Once the conversation has occurred, the devices — whether they’re part of a computer system or a stereo — form a network. Bluetooth systems create a personal-area network (PAN), or piconet, that may fill a room or may encompass no more distance than that between the cell phone on a belt-clip and the headset on your head.
Once a piconet is established, the members randomly hop frequencies in unison so they stay in touch with one another and avoid other piconets that may be operating in the same room. When the base is first turned on, it sends radio signals asking for a response from any units with an address in a particular range. Since the handset has an address in the range, it responds, and a tiny network is formed.
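the hopping-in-unison trick is easy to sketch: both radios run the same pseudo-random sequence, so they land on the same channel at the same instant without coordinating each hop. a python toy (real bluetooth derives the sequence from the master device’s address and clock; the shared seed here is a stand-in):

```python
import random

# Spread-spectrum frequency hopping in miniature: two paired devices
# seed the same PRNG, so they hop through the 79 channels in unison.
CHANNELS = 79            # 1 MHz channels in the 2.4 GHz band
HOPS_PER_SECOND = 1600   # Bluetooth's hop rate

def hop_sequence(shared_seed, n_hops):
    rng = random.Random(shared_seed)
    return [rng.randrange(CHANNELS) for _ in range(n_hops)]

# both members of the piconet compute the same sequence independently
phone   = hop_sequence(shared_seed=42, n_hops=5)
headset = hop_sequence(shared_seed=42, n_hops=5)
print(phone)
print(phone == headset)   # prints True: they stay in touch while hopping
```

another piconet in the same room would have a different seed, so its sequence almost never collides with this one for long.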
By adding more layers, the reflectivity can be increased to greater than 99.99%, producing a high-reflector (HR) coating. The level of reflectivity can also be tuned to any particular value, for instance to produce a mirror that reflects 90% and transmits 10% of the light that falls on it, over some range of wavelengths. Such mirrors are often used as beamsplitters, and as output couplers in lasers. Alternatively, the coating can be designed such that the mirror reflects light only in a narrow band of wavelengths, producing an optical filter. Transparent conductive coatings are used in applications where it is important that the coating conduct electricity or dissipate static charge. Conductive coatings are used to protect the aperture from electromagnetic interference, while dissipative coatings are used to prevent the build-up of static electricity. More complex optical coatings exhibit high reflection over some range of wavelengths, and anti-reflection over another range, allowing the production of dichroic thin-film optical filters.
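the 99.99% number checks out with the standard quarter-wave-stack formula: each added pair of layers multiplies the effective admittance mismatch, so reflectance climbs fast. a python sketch, with illustrative refractive indices roughly like TiO2/SiO2 on glass (my assumption, not from the source):

```python
# Reflectance of an ideal quarter-wave high-reflector stack at its design
# wavelength, using the standard thin-film result for a (HL)^N H design
# on a substrate: Y = (n_hi/n_lo)^(2N) * n_hi^2 / n_sub,
#                 R = ((n0 - Y) / (n0 + Y))^2.
def stack_reflectance(n_hi, n_lo, n_sub, pairs, n0=1.0):
    y = (n_hi / n_lo) ** (2 * pairs) * n_hi ** 2 / n_sub
    return ((n0 - y) / (n0 + y)) ** 2

# illustrative indices: ~TiO2 (2.35) / ~SiO2 (1.46) on glass (1.52)
for pairs in (2, 4, 8):
    print(pairs, stack_reflectance(2.35, 1.46, 1.52, pairs))
```

with 8 pairs the reflectance is already past 99.9%; a few more pairs push it over 99.99%.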
1943 threshold logic, computational model for neural networks based on math and algorithms.
1958 perceptron, pattern recognition algorithm based on 2-layer learning computer network using addition and subtraction. exclusive-or computation couldn’t be processed until the backpropagation algorithm was created in 1975. works for linearly separable data, fails completely for nonseparable data.
1975 cognitron, early multilayered neural network with training algorithm. propagate information in one direction only, or bounce back and forth until self-activation at a node occurs and network settles on final state.
1980 neocognitron and hopfield net used adaptive resonance theory to produce bidirectional flow of inputs between neurons/nodes.
mid80s connectionism, parallel distributed processing.
1986 backpropagation network aka multilayer perceptrons using deep learning algorithms.
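the 1958 perceptron really is just addition and subtraction, and its exclusive-or failure is easy to reproduce. a python toy (my own sketch, not from any of the sources): it learns AND, which is linearly separable, and cannot learn XOR no matter how long it trains.

```python
# A 1958-style perceptron: a single threshold unit whose weights are
# updated by simple addition and subtraction.
def train_perceptron(samples, epochs=20, lr=1.0):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # -1, 0, or +1
            w[0] += lr * err * x1       # the whole learning rule:
            w[1] += lr * err * x2       # add or subtract the input
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

learned_and = train_perceptron(AND)
print([learned_and(x1, x2) for (x1, x2), _ in AND])   # [0, 0, 0, 1] -- learned

learned_xor = train_perceptron(XOR)
print([learned_xor(x1, x2) for (x1, x2), _ in XOR])   # never matches XOR
```

no single line through the plane separates XOR’s 1s from its 0s, which is exactly why the field stalled until backpropagation made multilayer networks trainable.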
Historically, computers evolved from the von Neumann model, which is based on sequential processing and execution of explicit instructions. On the other hand, the origins of neural networks are based on efforts to model information processing in biological systems, which may rely largely on parallel processing as well as implicit instructions based on recognition of patterns of ‘sensory’ input from external sources. In other words, at its very heart a neural network is a complex statistical processor (as opposed to being tasked to sequentially process and execute).
In modern software implementations of artificial neural networks the approach inspired by biology has more or less been abandoned for a more practical approach based on statistics and signal processing. The tasks to which artificial neural networks are applied tend to fall within the following broad categories: Function approximation, or regression analysis, including time series prediction and modeling. Classification, including pattern and sequence recognition, novelty detection and sequential decision making. Data processing, including filtering, clustering, blind signal separation and compression. Application areas of ANNs include system identification and control (vehicle control, process control), game-playing and decision making (backgammon, chess, racing), pattern recognition (radar systems, face identification, object recognition), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, financial applications, data mining (or knowledge discovery in databases, “KDD”), visualization and e-mail spam filtering.
Computational devices have been created in CMOS for both biophysical simulation and neuromorphic computing. More recent efforts show promise for creating nanodevices for very large scale principal components analyses and convolution. If successful, these efforts could usher in a new era of neural computing that is a step beyond digital computing, because it depends on learning rather than programming and because it is fundamentally analog rather than digital even though the first instantiations may in fact be with CMOS digital devices.
move: blood’s fern pattern, altered by psychoactive molecules, makes a molecular fern pattern copied by the kernel as the neural-network basis of the computer.
a biological- or molecular-based solution to computing involves the replacement of silicon-based transistor technology with organic or inorganic molecular and biological material, constructing switches from molecular material in the nanometer size range. The protein bacteriorhodopsin is one such candidate switching material. Advances in computing architecture and technology to replace silicon can also be found in the use of inorganic substances, such as lithium niobate used in the development of holographic memory, which exceeds the memory capabilities available from the use of bacteriorhodopsin.
Nanobiotechnology provides the means to synthesize the multiple chemical components necessary to create such a system. One can engineer a biocomputer, i.e. the chemical components necessary to serve as a biological system capable of performing computations, by engineering DNA nucleotide sequences to encode for the necessary protein components. Also, the synthetically designed DNA molecules themselves may function in a particular biocomputer system. The economical benefit of biocomputers lies in this potential of all biologically derived systems to self-replicate and self-assemble given appropriate conditions. For instance, all of the necessary proteins for a certain biochemical pathway, which could be modified to serve as a biocomputer, could be synthesized many times over inside a biological cell from a single DNA molecule, which could itself be replicated many times over. It also turns out to be non-trivial to program unconventional machines. Not all problems can be decomposed to take advantage of high degrees of parallelism. And (outside of PhD theses) there aren’t a lot of tools to help programmers. Not to mention that many of the problems that require high degrees of parallelism can be attacked with special-purpose hardware — I’m thinking graphics cards here. So basically, cheap and easy beats massively expensive and hard to use.
Cryostats used in MRI machines are designed to hold a cryogen, typically helium, in a liquid state with minimal evaporation (boil-off). The liquid helium bath is designed to keep the superconducting magnet’s bobbin of superconductive wire in its superconductive state. In this state the wire has no electrical resistance and very large currents are maintained with a low power input. To maintain superconductivity, the bobbin must be kept below its transition temperature by being immersed in the liquid helium. If, for any reason, the wire becomes resistive, i.e. loses superconductivity, a condition known as a “quench”, the liquid helium evaporates, instantly raising pressure within the vessel. A burst disk, usually made of carbon, is placed within the chimney or vent pipe so that during a pressure excursion, the gaseous helium can be safely vented out of the MRI suite. Typically cryostats are manufactured with two vessels, one inside the other. The outer vessel is evacuated, with the vacuum acting as a thermal insulator. The inner vessel contains the cryogen and is supported within the outer vessel by structures made from low-conductivity materials. An intermediate shield between the outer and inner vessels intercepts the heat radiated from the outer vessel. This heat is removed by a cryocooler. Older helium cryostats used a liquid nitrogen vessel as this radiation shield and had the liquid helium in an inner, third, vessel. Nowadays few units using multiple cryogens are made, with the trend being towards ‘cryogen-free’ cryostats in which all heat loads are removed by cryocoolers.
Although many properties of superconductors can be described in macroscopic terms such as resistivity, heat capacity, critical temperature, etc., superconductivity is at base a quantum phenomenon and several interesting quantum effects arise.
In 1961, two groups working independently discovered flux quantization: the fact that the magnetic flux through a superconducting ring is an integer multiple of a flux quantum.
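the flux quantum itself is one line of arithmetic: Φ₀ = h/2e, the 2 appearing because the supercurrent is carried by cooper pairs of charge 2e.

```python
# The superconducting flux quantum, Phi_0 = h / (2e). The factor of 2
# reflects that the current is carried by Cooper pairs of charge 2e.
h = 6.62607015e-34      # Planck constant, J*s (exact in the 2019 SI)
e = 1.602176634e-19     # elementary charge, C (exact in the 2019 SI)

phi_0 = h / (2 * e)
print(phi_0)            # ~2.0678e-15 weber
```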
The Cooper pairs of a superconductor can tunnel through a thin insulating layer between two superconductors. This is the basis for the Josephson junction which is used in high-speed switching devices.