Saturday, March 14, 2020
The PC of the Future: Major Developments in Hardware and Software (Essay Example)
Today's computers have only two more generations left in which they can go on becoming smaller and more powerful at the same time, the two generations that current technologies for miniaturizing their basic circuits are estimated to allow. The prospect of not being able to sustain this trend does not please physicists and computer technologists at all, so, backed by the big companies of the sector, they are searching for completely new approaches for the computers of the future. None of these approaches looks simple, but all are suggestive, although it is still premature to try to imagine one of these molecular, quantum or DNA computers.

Anyone who buys a computer nowadays knows it will be obsolete in a couple of years. We now take for granted the inexorable increase in the power of computers. But that cannot continue forever, at least not if computers remain based on current technologies. Gordon Moore, co-founder of Intel and one of the gurus of information technology, predicts that existing miniaturization methods will yield only two more generations of computers before their capacity is exhausted. In 1965, Moore made a prediction that was confirmed with amazing precision over the three following decades: the power of computers would double every 18 months. This increase has been due mainly to the ever smaller size of electronic components, so that more and more of them can be packed onto a microprocessor, or chip. A modern chip of only half a square centimeter contains many millions of tiny electronic components such as transistors. Each one measures less than a micron across, roughly a hundredth of the thickness of a human hair.

These components are made basically of silicon, which conducts electricity, and of silicon dioxide, which is an insulator. To etch circuit patterns into silicon microprocessors, a technique called photolithography is currently used: a polymer film carrying the layout of the circuitry is formed on the layers of silicon or silicon dioxide, the circuit pattern is recorded in the polymer film by exposing it to light through a mask, and etching chemicals are then applied that eat away the unprotected silicon material.

Limitation

The size of the features that can be created by this procedure is limited by the wavelength of the light used to fix the pattern. At present they can measure as little as one fifth of a micron. But to create even smaller electronic components, down to one tenth of a micron across, chip manufacturers will need to settle on radiation of a shorter wavelength: shorter-wavelength ultraviolet light, X-rays or high-energy electron beams. The giants of the computer industry have not yet agreed on which to choose, but in any case the costs of developing the new technology and then changing the production process will be enormous. IBM, Motorola, Lucent Technologies and Lockheed Martin have been driven to collaborate on the development of X-ray lithography. But miniaturization is not limited by photolithography alone.
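As a side note, the arithmetic behind that 18-month doubling is easy to sketch. The short Python script below projects transistor counts and feature sizes over a few generations; the starting values are rough assumptions chosen for illustration, not figures taken from the essay.

```python
# Rough illustration of Moore's-law-style scaling.
# Assumed starting point (hypothetical values, for illustration only):
#   10 million transistors on a chip, 0.2 micron features, doubling every 18 months.

transistors = 10_000_000      # assumed initial transistor count
feature_size_um = 0.20        # assumed initial feature size in microns
months_per_doubling = 18

for generation in range(1, 6):
    transistors *= 2
    # Doubling the transistor count on a fixed die area implies shrinking
    # the linear feature size by roughly 1/sqrt(2).
    feature_size_um /= 2 ** 0.5
    print(f"after {generation * months_per_doubling:3d} months: "
          f"{transistors:>12,} transistors, ~{feature_size_um:.3f} um features")
```

Run as-is, the sketch simply makes visible how quickly the assumed feature size approaches the atomic scale discussed next.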
Even if methods can be devised to make transistors and other devices still smaller, will they go on working effectively? Moore's law predicts that by the year 2002 the smallest element of a silicon transistor, the gate insulator, will be only 4 or 5 atoms across. Will such a thin layer still provide the necessary insulation? This question has recently been investigated by the physicist David Miller and his colleagues at Lucent Technologies. They used advanced fabrication techniques to obtain a silicon dioxide film 5 atoms thick, which they sandwiched between two layers of silicon. By comparison, commercial microprocessors have insulators about 25 atoms thick. Miller and his colleagues discovered that their ultrathin insulating oxide was no longer able to isolate the silicon layers. The researchers calculated that an insulator less than 4 atoms thick would leak so much that it would be useless. In fact, because of the difficulty of making perfectly smooth, even films, insulators of twice that thickness would already begin to break down if made with present methods. Conventional silicon transistors will therefore have reached their minimum working dimensions in only a decade or so. Many computer technologists maintain that, for the moment, silicon is what there is; but what there is may soon run out.

On the other hand, to try to imagine the computer of the future is to risk sounding as absurd as the science fiction of the fifties. Nevertheless, judging by the current dreams of technologists, we may be able to do without the plastic boxes and the silicon chips. Some say that computers will look more like organisms: their wires and switches will be made up of individual organic molecules. Others talk of computing in a bucket of water sprinkled with strands of DNA, the genetic material of cells, or enriched with molecules that manipulate data in response to the vibrations of radio waves. One thing seems certain: for computers to keep growing in power, their components, the basic elements of the logic circuits, will have to be incredibly tiny. If the present trend toward miniaturization persists, these components will reach the size of individual molecules in less than a couple of decades, as we have seen.

Scientists are already examining the use of carbon molecules called nanotubes as molecular-scale wires that could connect conventional solid-state silicon components. Carbon nanotubes can measure only a few millionths of a millimeter, that is, a few nanometers, which is less than one tenth of the diameter of the smallest wires that can be etched on commercial silicon chips. They are hollow tubes of pure carbon, extremely strong, with the added attraction that some of them conduct electricity. Scientists at Stanford University in California have grown nanotubes from methane gas that connect two terminals of electronic components. But connecting the wires is the easy part. Can molecules process binary information? That is, can they combine sequences of bits (ones and zeros, encoded as electrical impulses in today's computers) the way the logic gates built from transistors and other devices on silicon chips do?
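To make that question concrete, the following sketch (a plain Python illustration added here, not something from the essay) shows the kind of behavior such a molecular device would have to reproduce: a half adder, one of the simplest circuits built from logic gates.

```python
# A half adder: the simplest circuit that "adds" two bits.
# It is built from two elementary logic gates, XOR and AND.

def half_adder(a: int, b: int):
    """Return (sum, carry) for two input bits a and b."""
    total = a ^ b   # XOR gate: 1 when exactly one input is 1
    carry = a & b   # AND gate: 1 only when both inputs are 1
    return total, carry

# Truth table: every input combination produces a defined output combination.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```

Whatever the physical substrate, silicon, molecules or DNA, this is the elementary input-to-output mapping that has to be implemented reliably.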
In a logic operation, certain combinations of ones and zeros in the input signals generate other combinations in the output signals. In this way data are compared, sorted, added, multiplied or otherwise manipulated. Individual molecules have already carried out a few logic operations, with the bits encoded not as electrical impulses but as pulses of light or as other molecular components. For example, a molecule might release a photon, a particle of light, if it received both a charged metal atom and a photon of a different color, but not if it received only one of the two. Nevertheless, nobody has a real idea of how to connect such molecules into a reliable, complex circuit that could be used to compute: a true molecular computer. Some critics say that molecular computing will never be viable.

Calculations with DNA

At the beginning of the nineties, Leonard Adleman, of the University of Southern California, proposed a different way of using molecules to compute, pointing out that the cell's own database, DNA, can be used to solve computational problems. Adleman realized that DNA, which is basically a chain of four different molecular components, or bases, acting as a four-letter code for genetic information, bears a remarkable resemblance to the universal computer postulated in the thirties by the mathematical genius Alan Turing, which stores binary information on a tape. Different strings of bases can be programmed at will into synthetic DNA strands using the techniques of modern biotechnology, and those strands can then be replicated, cut and reassembled in enormous quantities. Could these methods be used to persuade DNA to compute like a Turing machine?

Adleman saw that the DNA system could be especially well suited to minimization problems, such as finding the shortest route connecting several cities. This kind of problem is among the hardest for conventional computers, since the number of possible routes grows very quickly as more cities are included, and a current computer takes a long time to examine all those options. But if each possible solution is encoded in a DNA strand, the problem does not look so daunting, because even a small sample of DNA contains many trillions of molecules. All that is then needed is to separate out the DNA strands that encode the best solution, which can be done using biotechnological methods that recognize specific short sequences of bases on a DNA strand. This procedure is nothing more than a slightly unorthodox way of finding a solution: first generate all the possible solutions, and then use logic operations to pick out the correct one. But because everything happens in parallel, with all the possible solutions created and examined at the same time, the process can be very fast. Computing with DNA has been demonstrated in principle, but it has not yet been shown to solve problems that a conventional computer cannot. It seems better suited to a fairly specific class of problems, such as minimization and codification, than as a method of computation for questions of every kind.
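The "generate every candidate, then filter" strategy described above can be sketched as an ordinary sequential program; the massive parallelism is exactly what the DNA soup would supply for free. The four cities and the distances below are made-up values for illustration only.

```python
# Brute-force route search in the spirit of Adleman's DNA experiment:
# generate every candidate solution, then filter out the best one.
# In a DNA computer all candidates would exist, and be tested, in parallel.
from itertools import permutations

# Hypothetical distances between four cities (symmetric, invented for illustration).
cities = ["A", "B", "C", "D"]
dist = {
    ("A", "B"): 3, ("A", "C"): 7, ("A", "D"): 4,
    ("B", "C"): 2, ("B", "D"): 6, ("C", "D"): 5,
}

def leg(a, b):
    # Look up the distance in either direction.
    return dist.get((a, b)) or dist[(b, a)]

def route_length(route):
    return sum(leg(route[i], route[i + 1]) for i in range(len(route) - 1))

# "Generate": every possible ordering of the cities (n! of them).
candidates = list(permutations(cities))
# "Filter": keep only the shortest route.
best = min(candidates, key=route_length)
print(" -> ".join(best), "length", route_length(best))
```

With four cities there are only 24 orderings; the factorial growth of that candidate list is precisely why a sequential machine struggles and why the parallelism of a test tube is attractive.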
The quantum world

As early as the sixties, some computer scientists had noticed where miniaturization was taking them: toward the quantum realm, where the counterintuitive rules of quantum mechanics govern the behavior of matter. As the conventional devices in circuits become smaller, quantum effects become an ever more important aspect of their behavior. Could it be feasible, they asked, to turn this potential complication into an advantage? The suggestion bore fruit in the eighties, when physicists began to look closely at how a computer might operate under the influence of quantum mechanics. What they discovered was that it could gain enormously in speed.

The crucial difference between processing information in the quantum world and in the classical one is that the former is not black and white. In a classical computer, every bit of information is one thing or the other: either a 1 or a 0. But a quantum bit, or qubit, can be a mixture of both. Quantum objects can exist in a superposition of states that are classically exclusive, like Schrödinger's famous cat, which is neither alive nor dead but in a superposition of the two. This means that a series of quantum switches, objects in well-defined quantum states such as atoms in different states of excitation, has many more configurations of qubits than the corresponding classical series of bits. For example, whereas a classical three-bit memory can store only one of the eight possible configurations of ones and zeros, the corresponding quantum register can store all eight at once, in a superposition of states. This multiplicity of states gives quantum computers far more power, and therefore far more speed, than their classical counterparts.

But actually embodying these ideas in a physical device poses an extraordinary challenge. A quantum superposition of states is a very delicate thing, difficult to maintain, above all if it is spread over a large set of logical elements. Once the superposition begins to interact with its surroundings it starts to collapse, and the quantum information leaks into the environment. Some researchers think this problem will make large-scale quantum computing, in which huge amounts of data are manipulated in a multitude of steps, impossibly delicate and unwieldy. But the problem has been eased in recent years by the development of algorithms that allow quantum computers to keep working in spite of the small errors introduced by this kind of loss.
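As a rough illustration of the three-bit example above, the short script below simulates on a classical machine what an equal superposition of a three-qubit register looks like. It is only a numerical picture of the state vector, not a quantum program, and the equal amplitudes are simply chosen for the illustration.

```python
# Classical simulation of a 3-qubit register in an equal superposition.
# A classical 3-bit memory holds exactly one of these 8 configurations;
# a quantum register holds an amplitude for every one of them at once.
import math

n_qubits = 3
n_states = 2 ** n_qubits          # 8 basis states for 3 qubits

# Equal superposition: each basis state gets amplitude 1/sqrt(8).
amplitude = 1 / math.sqrt(n_states)
state_vector = [amplitude] * n_states

for index, amp in enumerate(state_vector):
    label = format(index, f"0{n_qubits}b")      # e.g. 000, 001, ..., 111
    probability = amp ** 2
    print(f"|{label}>  amplitude {amp:.4f}  probability {probability:.3f}")

# The probabilities sum to 1: a measurement collapses the register
# to a single one of the eight configurations.
print("total probability:", round(sum(a ** 2 for a in state_vector), 6))
```

The catch, as the text notes, is that keeping all eight amplitudes intact in real hardware is exactly the decoherence problem described above.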
MAJOR DEVELOPMENTS IN THE SOFTWARE

Introduction

Software engineering is not a 100 percent exact science. Every algorithm is shaped by the logic, the politics and the personal environment of the programmer. To talk about the future of software we need to know a few historical facts, and after that we will have to choose a side in the software wars, between those who defend open source code and those who defend closed source.

The Software wars

The Internet would not exist without free software. Back in the 1960s Bell Labs was already handing out the source code of its newly invented operating system, UNIX, and from that time to the latest version of the Linux kernel, the history of software has been built on the exchange of information. The fundamental basis of the network-society revolution is that exchange, which the Open Source movement keeps building. Free software is one field of information and communication technology that certainly has no problem of decline: it is a movement that keeps growing and that has made extraordinary advances in recent years. The statistics are usually eloquent.

Last year, 50 percent of software developers had already considered migrating their developments to open source. Applications as powerful as Sun's StarOffice suite or Real Networks' streaming server technology have acted as a driving force for many other, less well-known applications that are also moving toward the free development of their code. The strength of this revolution in computing and telecommunications rests on values and a philosophy unknown until now: the strength of community and of group work toward tasks and goals that acquire a special value in themselves for the developers, who are rewarded in a non-pecuniary way that until now seemed unthinkable in a Western world long shaped by the Protestant ethic and its values of work.

Students of these technologies and their implications, such as M. Castells, R. Stallman, P. Himannen, L. Torvalds and Jesus G. Barahona, speak to us constantly of the possibilities open to homo digitalis for reaching greater knowledge in the future, thanks to the adoption of policies in line with the founders of this movement, based on sharing code and knowledge for the mutual good. The movement represented by the Free Software Foundation goes beyond the mere choice of development policies for new information and communication technologies. Betting on open-code development, on the adoption of standards and on support for free operating systems strengthens the knowledge of the members of the digital society, rather than merely encouraging the consumption of computing through immediate access to its use. In a digital society, use is as important as knowledge of the tools and of their development, since that is precisely what gives power to citizens and organizations. With the adoption of computing policies based on free software, knowledge of networks and code is also passed on to the users, who can then play a fundamental role as active rather than passive actors in the digital revolution.

But everything this movement represents is incompatible with the policies of the great company that today exerts worldwide control over computing: Microsoft. The company from the state of Washington is capturing most of the souls of the world's users of the Network, of office software and of workstation operating systems. That is undeniable, as is, for the public administrations, the question of the systems by which these companies extract data from users to build profiles and databases that will end up who knows where. Given that the Redmond company belongs to a nation that forbids its subjects secure 1024-bit encryption, how are we, the users of the planet, to trust the security policies they apparently want to sell us? And that is how things stand, even though the U.S. Department of Defense itself trusts open source and its encryption systems, and uses them. On the other hand, anyone who has tried Microsoft's XP version knows well what network-based control of the user's data and the computer's MAC address means. And in the face of that, few people are left to fight against those policies.
The networks of hacker laboratories and other groups of people devoted to teaching free software tools, grounded in the knowledge needed to maintain servers, publish without censorship, develop programs, give computing courses and so on, are an alternative that is already bearing fruit. The gurus of the digital era have names linked to these movements at some stage of their lives. The father of this whole way of thinking is Richard Stallman, and its best-known face is Linus Torvalds, who not long ago was awarded the prize for the best European entrepreneur. Linus and Richard are the key pieces in this whole revolution based on freedom, on group work across the network, and on the pure satisfaction of work well done. The competition that certain companies can mount will count for little against a movement that is essential for server-side technologies; they will have to join it, as IBM and Sun have done, beginning to understand its potential and to profit from it.

To draw a comparison, imagine that the worldwide community of doctors and medical researchers worked as a network, sharing their knowledge at every moment of the day, and received in return the immediate solution to every problem they faced. With such a system many of today's diseases would long since have been eradicated. Moreover, in this example the professionals with large salaries tied to policies of maximum secrecy in the research laboratories would have little to say against the growing strength of a movement built on the Network and on the principle that knowledge is grounded in sharing. Those who know the difference between free languages and development systems such as PHP, ZOPE, Perl and the like know very well how far free code can take them. Those who know only proprietary, closed technologies, however, will hardly be able to look toward the future, since they are headed for a technological slavery.

Conclusion

Computer science is a complex science, though one that people think they know. Many people do not realize that it is a science with two branches that are distinct from each other yet interdependent: the architecture of the computer and the software needed to use it are both very important. But its possible uses are so many that specialists are needed, as in medicine, for each of its parts. Back in 1947, when the transistor was invented, and in 1804, when Jacquard designed a loom that performed predefined tasks by feeding punched cards into a reading contraption, nobody imagined how quickly we would arrive at today's supercomputers.