Artificial Intelligence: Everything You Needed To Know Yesterday.

Dheeraj
6 min read · Oct 23, 2022


Artificial Intelligence. Easily one of the most arcane subjects to the figurative layman. As a high school student hopelessly captivated by AI, it didn't help that I was daunted by how little anyone in my life knew about it. It was something I'd read enough about by that point to know it spelt a natural evolution of man, of practically everything.

In my interactions with people, I've found that the uninitiated tend to have either strange, amusing notions of what AI is or no idea at all. However regrettable, it's understandable, because this now-rampant perception gap is one history has long set precedent for.

"An invention has to make sense in the world it finishes in, not in the world it started in." ~ Tim O'Reilly

Whenever innovative novelty has made it to the public eye, we've displayed a pattern of opposition. By some global surveys, nearly 80% of people are, at some level or in some way, fearful of AI technology. Originality has always had a way of dragging fearful trepidation along as its consequence. Be it vaccines or lightbulbs, we have never taken easily to things we don't comprehend. Our natural response has always been to meet them with resistance.

While it's unsurprising, it is still pretty unfortunate. AI is technology our world is already laden with; it's quite literally everywhere, you just don't see it. It's behind the ads you see every third second, behind your phone's facial recognition features, behind Snapchat and Instagram filters, even behind the search engine that probably led you here. In the past decade, AI has silently multiplied in its applications; little do most people know that it's ever closer to turning the corner on a market value of approximately $190 billion.

Its patronage by governments and private corporations alike grows by the day. It's seeping into more industries all the time; with a keen eye, you'll soon see it in the places you'd least expect: education, healthcare, and even governance. Its sheer power heralds too much for AI to remain unseen. A more tangible example of its capacity is something taking the public by storm right now: AI-generated artwork, like that produced by a model called "Stable Diffusion," released in 2022. Is your mind blown just yet?

Make no mistake that this is still barely even the tip of the iceberg, and you need to know everything about it. You're not a citizen of a world that will soon see the widespread expansion of Artificial Intelligence; you're a citizen of one that already has.

I’ll break it down for you.

Underneath the apparent (and mostly unfounded) threat of its power, AI is a beautiful concept. It also happens to be a pretty abstract one: think about what qualifies a machine to be "intelligent." Try to answer that yourself.

The answer that consensus settled on: any technology that possesses the capability to mimic and simulate human cognition, with the strongest emphasis on the ability to learn.

Underneath the umbrella of that definition sits machine learning, which is simply a subset, or branch, of AI. ML tends to be employed through machine learning algorithms: models that "learn" from historical data to make successful predictions in the future.

The cornerstone of ML is that it does not need to be explicitly programmed; it finds correlations and patterns in the data you feed it. Human intervention, or even presence, is intended to be unnecessary with machine learning. It's pretty important to know that. You can also generally imagine that it learns the same way we do: improving automatically as it's given more to go off of.
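To make that contrast concrete, here's a minimal sketch, in Python with scikit-learn and an entirely made-up "spam" example, of an explicitly programmed rule next to a model that learns an equivalent rule from examples:

```python
# A hand-written, explicitly programmed rule:
def is_spam_rule(num_exclamation_marks):
    return num_exclamation_marks > 3  # a human chose this threshold

# The ML alternative: let a model find the threshold from examples.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: one feature (exclamation marks), one label (spam?)
X = [[0], [1], [2], [5], [7], [9]]
y = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier()
model.fit(X, y)              # the "learning" step: no rule was written by hand
print(model.predict([[8]]))  # -> [1]: the model inferred the boundary itself
```

Nobody told the model where spam begins; it worked the boundary out from the data. That's the whole idea in miniature.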

Okay, so how does it work?

There's no one model; every use case of ML is unique, and so is the way every model "learns." It should be rather obvious that ML is entirely data-driven: it's fed voluminous amounts of data, from which it draws statistical correlations and connections between entities, and it then uses these to serve its purpose as a predictive algorithm.

You give it data with the output, to learn from. You then give it data without the output and ask what the output should be. In most cases (although with some exceptions), it's as simple as this.

Put simply, here's how it works. We feed the model data that would be its input, along with what would typically be its corresponding output, known as the label. Take a medical example: let's say you need to evaluate some readings from a cardiac patient and diagnose whether or not their heart will fail (which is the label in this example). Here, you'd give the ML model the readings of past patients as well as whether each heart ultimately failed.

The model thus "moulds" itself (by shifting its parameters) to come as close as possible to predicting successfully (whether congestive heart failure occurs) in cases where the output isn't given, which would be the case with an actual patient. It strengthens its predictive ability by establishing connections between the variables in the dataset: age, sex, and whether the patient is diabetic, for instance. These are the features of the data, the variables the model plays with. It learns with the simple intention of minimising error, its deviation from reality. While there are exceptions, this happens to be the crux of how ML learns.
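That "moulding" can be watched in miniature. Below is a toy sketch (plain NumPy, one made-up feature, a single parameter; real models train far more elaborately) of a parameter being shifted, step by step, purely to shrink the error:

```python
import numpy as np

# Toy data: one feature x and one label y, related by roughly y = 2x.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.0])

w = 0.0  # the model's single parameter, initially wrong
learning_rate = 0.01

for step in range(200):
    predictions = w * x
    error = predictions - y            # deviation from reality
    gradient = 2 * np.mean(error * x)  # the direction that increases the error
    w -= learning_rate * gradient      # shift the parameter the other way

print(round(w, 2))  # ends up near 2.0: the model "moulded" itself to the data
```

Two hundred tiny nudges, each one chosen only to reduce the error, and the parameter settles where the data points.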

Now, when you pass it the features of a real patient, that data passes through the "filters" of those learned parameters to finally give you an output: a prediction.
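Putting the whole story together, here's roughly what that train-then-predict loop might look like for the heart failure example. This is a hedged sketch using scikit-learn's LogisticRegression on a tiny, entirely hypothetical set of patient records:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [age, sex (0/1), diabetic (0/1)]
X_train = [
    [63, 1, 1],
    [45, 0, 0],
    [71, 1, 0],
    [52, 0, 1],
    [68, 0, 1],
    [39, 1, 0],
]
# The label: did heart failure ultimately occur? (1 = yes, 0 = no)
y_train = [1, 0, 1, 0, 1, 0]

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # training: parameters shift to fit the history

# A new, real patient: features only, no label, because the outcome is unknown
new_patient = [[60, 1, 1]]
print(model.predict(new_patient))        # the predicted outcome (0 or 1)
print(model.predict_proba(new_patient))  # the model's confidence in each class
```

Swap in a real dataset and a model suited to it and the shape of the workflow stays exactly the same: fit on labelled history, predict on the unlabelled present.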

Is that it?

This explanation has its caveats (chiefly that it's a simplification), though it does come pretty close to how machine learning works. For one, ML doesn't just do one thing; it has a myriad of applications:

  • classification: taking data with labels and assigning each point to one of some predefined, finite set of categories. The medical example of heart failure above is classification.
  • clustering: taking raw data without labels and grouping data points by similarity, hence finding patterns. An example would be looking for patterns in unlabeled consumer data, features like age and income, to find what your customers have in common. You group data points together; that's clustering.
  • regression: establishing the relationship between one or more independent variables and a continuous dependent variable. Simply put, you use a bunch of values to find another. A popular example of regression is taking one or more properties of a house to assess its value: you feed the model characteristics of the property, like the number of bedrooms, and after training, you get a price for the house! (There's a sketch of the latter two just after this list.)
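Classification already appeared above in the heart failure sketch; here's roughly what the other two might look like, again as hypothetical scikit-learn sketches with made-up customer and housing data:

```python
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Clustering: unlabeled customers, [age, annual income in $1000s].
customers = [[25, 40], [27, 42], [55, 95], [60, 100], [30, 45], [58, 98]]
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
print(kmeans.fit_predict(customers))  # e.g. [0 0 1 1 0 1]: two customer groups

# Regression: labeled houses, [number of bedrooms, area in square metres].
houses = [[2, 70], [3, 95], [4, 120], [5, 160]]
prices = [150_000, 210_000, 280_000, 360_000]
reg = LinearRegression().fit(houses, prices)
print(reg.predict([[3, 100]]))  # a continuous price, not a category
```

Notice the tell-tale difference: the clustering model was never shown labels at all, while the regression model returns a number on a continuous scale rather than picking a category.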

Every single one of these cases, depending on the application, the type of dataset, the expected output, et cetera, requires its own implementation of AI. The sheer number of algorithms out there is mind-boggling, and each one's different from the next.

Where does it end?

It just… doesn't. Just imagine: this itself is only a rather simplified introduction, and artificial intelligence is also only in its infancy. Some massive consequences of AI are the implications of its application in fields like law and art, which demand their own discipline of ethics: a vast, uncharted territory that's nothing short of bewitching. What's pretty intriguing, and a notable interest right now, is how inexplicable the whys behind an AI's decisions are. Even the engineers who build an AI cannot translate its parameters into definite answers; untangling this now constitutes its own field of study, called Explainable AI. This is to say that no one can say why an AI does a certain thing or what its reasoning is, only that it does it because it's been taught to.

And then there's the complexity of applying AI to auditory datasets, NLP (natural language processing), and computer vision. Of course, you also have regular neural networks, CNNs, and multitudes of other instances of AI that grow by the day. Developments break ground every day. Its ability seems infinite, and so are its scope and progression.

It’s like I said, merely the tip of the iceberg. There’s so much more to come.

Stay tuned. Stay aware.
