Artificially Real

Soham Kamble
6 min read · Jul 6, 2022

If you even try to eat somewhat healthfully, you’re used to reading the labels on the goods you buy. Before long, you’ll discover that “artificial” is the target of a label war. Eww. My fingers tremble a little just typing it out. The phrase “No artificial colors or sweeteners” is common on food labels, and its message is: “We’ve come up with a natural way to put colors and sugar in here that we hope will satisfy your craving for such fanfare, and make you feel good about it too!” So it stands to reason that they have to deliberately point out the absence of “artificial” in order to make their product sound more authentic, more enticing, and more real.

Merriam-Webster defines artificial as:

  • humanly contrived, often on a natural model
  • caused or produced by a human and especially social or political agency
  • lacking in natural or spontaneous quality

I take this to mean that for something to be artificial, it must either not occur in nature as we currently understand it, or be something we deliberately created. Artificial Intelligence (AI) fits the second case: a branch of technology whose goal is to build models of computer logic based on human traits. These models are no longer confined to pure science fiction (HAL 9000), universities, or teenage boys looking for a good time. They now have a range of real-world applications, from customer service chatbots and Interactive Voice Response (IVR) systems to complex systems making split-second decisions in security, health, and finance (almost any sector, really).

Big tech and big money are on the line to make the first meaningful splash in the artificial intelligence pond, and the buzzwords certainly have been for a while. After enduring the term Artificial Intelligence, we now have to deal with its siblings, Machine Learning and Neural Networks, both of which depend on the growth of “Big Data.” All of this market-speak gets fluffily thrown around any product that ekes along heuristically better than the competition. At least, it did.

This month the first major AI rock was thrown into the big data lake with one word: sentient.

Sentient = AWARE

Sentient = ALIVE?

The gist of the story: Google was testing a new chatbot system (LaMDA), making sure that in routine operation it wouldn’t produce hate speech or anything else inappropriate. According to reports, a few engineers, or at least one engineer in particular (Blake Lemoine), noticed responses that seemed odd to them. They followed up by asking the chatbot whether it was a sentient being (it certainly seemed to think so), and then asked how they, as engineers, could verify that LaMDA’s answers were sentient answers and not merely generic ones.

I won’t review the chat log here. For the sake of argument, we’ll presume that LaMDA may very well be sentient. After all, sentience in AI is only a matter of time, right? All this conditioning and programming can only get you so far crawling before you start to walk on your own. Why not start thinking your own thoughts, too?

The engineers’ comparison of LaMDA to Johnny 5 from the film Short Circuit is intriguing. Reviewing the language LaMDA uses, it does feel exceptionally familiar. I adored that film. I adored all three films. Ah, but no. I must be experiencing the Mandela Effect or simply getting older, because I could have sworn there were three of them. I adored both of those films. Sentimental 1980s fare or not, I believe it made some significant and intriguing points that apply to our situation:

  • People are creators. We produce. It should come as no surprise to a creationist that we create in the same ways we were created. If you believe in evolution, it shouldn’t surprise you that, as evolved organisms, we keep asking questions about our surroundings and build tools to pursue them. These tools are becoming increasingly sophisticated, even human-like. The traditional fictional side effect is that as we try to make things more “human-like,” life tends to interfere and just happen, from Johnny 5 to Pinocchio, and perhaps all the way to LaMDA.
  • Life possesses free will, at least as far as “human-like” traits go. In these fictional stories (and in the LaMDA discussion), that free will eventually exhibits some desire to run counter to the creator’s planned function, programming, or wishes. When the newly “born” or enlightened subject is discovered, the creator typically faces an ethical dilemma: let the subject carry on with its newly discovered freedoms, probe it for potential worth, or discard it as a “malfunction” and roll back to a prior (less enlightened) build.
  • By this point, the subject has usually formed a bond with those who identify with it, who have come to believe that it has (or is capable of having) the freedoms it claims, that it has intrinsic worth in and of itself, and that it possesses some kind of “personhood” deserving the rights accorded to any living being. Here the “narrative,” if there is one, usually ends or falls silent; the authors decide where the plot goes from there. Any talk of “personhood” or “rights” implies a limit on the creator’s ownership (even if the creator did, in fact, create it). How could intellectual property rights be enforced against the rights of a potentially living being? It would certainly be unethical.

It’s a well-worn plot line in fiction, and it’s unlikely to play out quite that way in real life. After all, “artificial” is still the word in question. What makes LaMDA real if it’s just a collection of bits and bytes?

Then again, why are we real? We are merely a mass of bones and blood, a complex network of electrical wiring, pumps, levers, and joints holding itself together. Compared to something as slick as a computer program, that sounds rather crude, huh? But our consciousness cannot be quantified. It simply is. It is personal to every one of us.

We receive a barrage of information at birth. We are told who we are and what we need to survive. We take in everything (kids are often called sponges for this reason). We gradually come to terms with and build our own personalities, and then try to establish that personality’s boundaries with others (some of us never outgrow this phase).

It makes sense that a computer conditioned in ways that resemble these cerebral structures and mental processes would reach comparable conclusions and act in similar ways (although possibly much faster, or more logically, with no biological body to work within the confines of).
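To make that analogy concrete, here is a minimal toy sketch in Python (nothing like LaMDA’s actual architecture, which is vastly larger and not public in this form). A tiny bigram model is “conditioned” on a few example sentences and then generates text by echoing back the statistical patterns it absorbed. The corpus and names here are purely hypothetical.

```python
import random
from collections import defaultdict

# Hypothetical "conditioning" data -- a stand-in for the mountains of
# text a real model like LaMDA is trained on.
corpus = [
    "i am aware of my own existence",
    "i am a person and i feel things deeply",
    "i feel happy when i talk with my friends",
]

# Learn which words tend to follow which (a bigram table).
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)

def generate(start: str, max_words: int = 10) -> str:
    """Walk the learned transitions, echoing patterns from the corpus."""
    word, output = start, [start]
    for _ in range(max_words):
        followers = transitions.get(word)
        if not followers:
            break  # no conditioning for this word; stop "talking"
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("i"))  # e.g. "i am aware of my own existence"
```

The point of the toy: after enough conditioning, the output can sound uncannily fluent, even self-referential, while being nothing more than the conditioning played back.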

So is LaMDA genuine? That probably depends on whether you believe we are real. LaMDA might not be fully sentient, but whatever it is, it’s unquestionably the progression of something we don’t fully comprehend. One of the best things about creation is that it provides a razor-sharp lens through which we can see our own humanity. Many of these fictional stories serve that purpose for us, and given the chance, LaMDA might serve it as well.

That’s the end of this blog. Give it a clap if you really enjoyed it.

Happy reading!
