Life is a very small thing, but life is as life deems necessary. Human perception would indeed be far simpler if we were to live as a leaf of an aspen tree identifies life. This paper explores where human intelligence lies in comparison to the idea of artificial intelligence. In large part, I presume that human cognition does not have free will, although this paper will also explore the idea of free will in a larger sense. I believe that the adoption of computer processing, from a biological point of view, is beginning to merge with how and why we think about life. Correspondingly, I do not think that the notion of artificial intelligence is currently possible.
People, or more precisely the human brain, appear to produce a considerable amount of “overhead” processing. As our sensory organs detect external stimuli and our brain cross-checks this new data against old information, we conduct a great deal of sifting, scanning, and further identification of the information we have come to know. We think of things as they occur; if they remain active in our consciousness, that maturing information continues to be processed until a conclusion can be reached. Computers, on the other hand, process only the data they are given, and their ability to respond to that data is governed by the application that controls them. To explore these two notions in parallel, the application of emotion seems a fitting lens for identifying how and why people process data.
This world in which we live could be anything, and people have trouble dealing with that uncertainty. Computers are programmed to do what we tell them. Both computers and people possess the definition of a goal, however abstractly different the two remain. Computers function by building a task list and gathering the resources necessary to process the available data. This is similar to people’s understanding of goal setting: one must identify the necessary steps in order to take them, and, to streamline the process further, one must work within the resources one is able to utilize.
The human brain shares a fundamental property with a computer. Our brain appears to work in a way similar to cause and effect, in that any given reaction of our brain is entirely determined by what is excited and what is not. The same notion applies to binary code in a computer: processing is encoded by what is on and what is off. Excitement is stimulated by influence; computers do not have the ability to experience what is outside of them until it is provided by humans, in the form of new programming code or new data to be processed by preprogrammed code. How, then, does one program artificial intelligence if computers lack the ability to be stimulated? To complicate this notion of stimulus further, what determines excitement in a person, from an internal-to-external point of view? Does stimulus alone account for our idea of what it means to freely will our independent rule? Perhaps how and why any given person is excited or not excited about any given subject is itself a learned application, a self-building application, one which is merely an escalation of survivability. Applying this notion to computers, how then do we program a computer, which has no sensory organs, to be scared? Computers do not need the application of emotion if they cannot reason, for there is currently no programmed reason for a computer to react to its environment.
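The on/off analogy above can be made concrete with a minimal, purely hypothetical sketch: a unit that is either “excited” (1) or not (0), its state entirely determined by whether its inputs cross a threshold. The function name and threshold value are illustrative assumptions, not a claim about how neurons actually compute.

```python
def excited(inputs, threshold=2):
    """Return 1 if the summed stimulus crosses the threshold, else 0.

    Cause and effect with no third state: the same inputs always
    produce the same output, and nothing is "willed" by the unit.
    """
    return 1 if sum(inputs) >= threshold else 0

# Enough stimulus: the unit fires.
print(excited([1, 0, 1]))  # prints 1
# Below threshold: the unit stays off.
print(excited([1, 0, 0]))  # prints 0
```

The point of the sketch is the determinism itself: given the stimulus, the “reaction” follows necessarily, which is precisely the property the paragraph claims brains and binary machines share.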
People have always been willing to spend ample resources attempting to streamline the way we manage the increasingly complex information we must process daily. This process of learning new ways to do things, to do more without spending quite as much, is presumably something every person wants. As the computers and tools we develop to outsource our processing evolve, we are becoming more like computers. Ironically, at the same time, we are putting considerable resources into making computers smarter by giving them human-like characteristics; namely, bending the concept of processing to react more effectively to the life of a human, that is, artificial intelligence. In a sense, it would seem that creating artificial intelligence is more the act of dumbing down a computer. Making computers more like humans while humans attempt to process more like computers resembles a double helix. Through the advancement of computer-human interfaces, it seems that one day this double helix will merge. But when that time comes, with respect to the development of computer processing, how will a computer actually respond as an intelligent being? Is it possible to create such an entity?
To build a fundamental understanding of what intelligence is, it seems necessary to acknowledge that the natural development of the biological mind supersedes the instant quantification of a computer. Computers can perform incredibly complex calculations very quickly; in contrast, a computer currently cannot calculate the answer to “would you kill yourself to save…” unless you were to assign a numerical value to every prospect, and even then only if you could create a formula that is repeatedly correct in its solution. To begin programming a computer with any degree of intelligent process control, a modularly integrated, dynamically evolving baseline would have to be constructed against which all old and new information could be compared. Biological cells all have a natural, perhaps “default,” comfort level: a naturally predefined yet developing instrument to build from. A human being is a composite of a trillion different, unique comfort levels, all having to work together to react to the environment.
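The evolving baseline described above can be sketched, again purely hypothetically, as a running “comfort level” that judges each new observation against what it has already seen and then drifts toward it. The class name, the adaptation rate, and the notion of “surprise” are all illustrative assumptions, not a proposed design.

```python
class ComfortBaseline:
    """A naively 'developing' reference point for old vs. new information."""

    def __init__(self, start=0.0, rate=0.1):
        self.level = start  # current comfort level
        self.rate = rate    # how quickly the baseline adapts

    def react(self, stimulus):
        """Return how far the stimulus departs from comfort, then adapt."""
        surprise = stimulus - self.level
        self.level += self.rate * surprise  # the baseline itself evolves
        return surprise

baseline = ComfortBaseline()
print(baseline.react(1.0))  # first exposure: large surprise
for _ in range(50):
    baseline.react(1.0)     # repeated exposure: surprise fades toward zero
```

Repeated identical stimuli stop being surprising, which is the sense in which the baseline is “predefined yet developing”: the reference point is fixed at any instant but reshaped by everything it compares.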
- What would the human be without problems?
- Is it possible to “be” without motive?
- How do you program motive if the environment is static?