AI - Why All the Hype?

Rie Vainstein • August 5, 2024


What is AI? How Can it Help Us?


We’ve all heard about Artificial Intelligence, or AI, but many of us don’t really know what it is. In this blog, I shall give a very brief, high-level overview of some examples of AI to help draw back the proverbial curtain.


What is AI?


Artificial Intelligence is exactly what it says on the label: intelligence that is artificial (from a machine) rather than from a human source. However, humans still have to set up the machine in the first place and get it started.


You could think of AI a bit like a human making a snowball and rolling it down a hill. As it rolls down the hill it gets bigger (or learns) and human input is no longer needed to continue the snowball’s motion. Occasionally, the snowball might veer off the path and get stuck in a bush, at which point the human has to rescue it and get it rolling again, but I think you get my “drift”? (snowdrift – see what I did there?!)


When Did AI Start Being a "Thing"?


Well, AI has actually been a concept for centuries. Greek myth tells of Talos, a giant bronze automaton built to protect the island of Crete against invaders. (You may have heard of Cisco’s cyber security team, which is also called Talos – now you have a little talking point for your next party!) According to Chinese legend, in the 10th century BCE the engineer Yan Shi presented an early automaton that could move on its own. Other thinkers of antiquity, including Aristotle, wrote about formal systems of logic – the groundwork for the idea of mechanical thought.


So, the idea has been around for a very long time, but it wasn’t until the 1950s that it was made concrete by scientists such as Alan Turing and John McCarthy.


What Makes AI?


Basically, “AI” is the umbrella term under which all of the other machine-thinking technologies sit.


Without getting too far into the weeds, because this whole field of technology is expanding at an ever-increasing speed (remember the snowball!), the following are a few of the catch-all headings...


Let’s Start With a Quick Note About Data.


Nothing works without data. If you don’t have some form of input, even the most powerful computer will simply sit there, quietly, doing nothing. Data is the foundation for all types of AI.


There are many types and sources of data. Data can come in the form of numbers, letters, tables (or spreadsheets), text, images, and audio files. We could get into the differences between structured and unstructured data, but we’ll leave that for another time. Suffice it to say, AI wouldn’t work without data.


Data can lead to new insights and new knowledge – it all depends on how it is treated.



How is Machine Learning Related to AI?


Underneath the umbrella of AI is Machine Learning.


The most foundational kind of Machine Learning is called Supervised Learning, where an input (A) is given to a computer and the machine produces an output (B). This is known as A-to-B mapping. It is called “supervised” because you, the machine learning engineer, have to teach the computer what you want it to map by showing it a great many examples of input (A) paired with the correct output (B), until the computer recognizes new inputs and maps them correctly. Given the amount of work involved in teaching the computer, it might seem simpler to do the task yourself. However, once the computer gets the “hang” of the input-to-output mapping, it can do the task far more quickly, efficiently, and accurately than a fallible human.
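
To make the A-to-B mapping idea a little more concrete, here is a minimal sketch in Python using the scikit-learn library. It is not from the original post – the pen-lid measurements, labels, and model choice are all invented purely for illustration:

```python
# A minimal sketch of supervised learning (A-to-B mapping) with scikit-learn.
# The measurements, labels, and model choice are invented purely for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Inputs (A): [length_mm, diameter_mm] measurements of manufactured pen lids
A = [[40.1, 12.0], [39.8, 11.9], [40.2, 12.1],   # typical, acceptable lids
     [35.0, 10.0], [45.5, 14.2], [40.0, 16.5]]   # clearly out-of-spec lids

# Outputs (B): the labels we want the machine to learn (0 = good, 1 = defective)
B = [0, 0, 0, 1, 1, 1]

# The "supervised" step: show the machine inputs (A) paired with the correct outputs (B)
model = KNeighborsClassifier(n_neighbors=1)
model.fit(A, B)

# Once trained, the model maps new, unseen inputs (A) to outputs (B) on its own
print(model.predict([[40.0, 12.0]]))   # [0] – this one looks like a good lid
print(model.predict([[46.0, 15.0]]))   # [1] – this one looks defective
```

A real system would learn from thousands of labelled examples rather than a handful of measurements, but the principle – input A mapped to output B – is exactly the same.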


One example is visual inspection – spotting defects in a manufacturing scenario. Let’s say you manufacture pen lids. If you show the computer images of all the acceptable variations of the lids you create, along with images of defective ones, eventually the computer will be able to watch all the pen lids coming off the production line, down the conveyor belt, and push out the defective ones – the ones that are too small, too big, or misshapen. It can do this much more efficiently than a human, leading to improved accuracy, cost savings, and fewer customer complaints about defective products.


At a more complex level, supervised learning also powers spam filters and speech recognition software – and, at the far end of the scale, even self-driving cars!


Neural Networks


Scientists love to create exciting terms to describe their work. You may have heard of Neural Networks and perhaps even Deep Learning. These terms are largely interchangeable and refer to the same field of technology; however, Deep Learning is usually reserved for much larger, more complex Neural Networks.


What is a Neural Network? In its most basic form, it is an input X being manipulated by a mathematical function, resulting in an output Y. Think back to your childhood school days, when you sat in an algebra class. Your teacher would write on the chalkboard: X + 3 = Y. The input X, acted on by the function “+3”, equals the output Y. You could put anything in the place of X and, after the function has had its effect on that input, Y is revealed as the output. For example, 15 + 3 = 18, or 1,021 + 3 = 1,024. In a Neural Network there could be tens, hundreds, or thousands of such functions, or algorithms, operating on the input. The Neural Network might also take in multiple inputs and work on them in different ways before combining them to create an output. Neural Networks can be incredibly complex, but they can be amazingly rewarding in their ability to create useful output.
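
If you would like to see what “lots of small functions combined” looks like in practice, here is a minimal sketch of a tiny Neural Network forward pass, written in plain Python with NumPy. It is not from the original post – the inputs, weights, and layer sizes are invented purely for illustration: two inputs feed a hidden layer of three neurons, whose results are then combined into a single output Y.

```python
# A minimal sketch of a tiny neural network forward pass, using NumPy.
# All inputs, weights, and biases are invented purely for illustration.
import numpy as np

def relu(z):
    # A common activation function: keep positive values, zero out negatives.
    return np.maximum(0, z)

x = np.array([15.0, 1021.0])        # two inputs (think of two different values of X)

W1 = np.array([[ 0.5, -0.2],        # weights for a hidden layer of three neurons
               [ 0.1,  0.4],
               [-0.3,  0.8]])
b1 = np.array([3.0, 3.0, 3.0])      # biases (each neuron's own little "+3")

W2 = np.array([[0.7, -0.5, 0.2]])   # weights that combine the three hidden results
b2 = np.array([0.1])

hidden = relu(W1 @ x + b1)          # each hidden neuron: weighted sum + bias, then activation
y = W2 @ hidden + b2                # combine the hidden results into one output Y

print(y)                            # the network's output for this particular input
```

Training a real network is essentially the process of automatically adjusting those weight numbers until the output Y is consistently useful.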


If you want to play with... sorry, let’s be professional, if you would like to “test” a basic Neural Network, you can go online to the following URL (for which we will take no responsibility, “at your own risk”, you’re on your own, etc.)... http://playground.tensorflow.org  and set up and watch a Neural Network operating in a sandbox-like environment.


Generative AI


The term Generative AI seems to be popping up all over the place at the moment, but what is it? Generative AI is able to take your input, in the form of a “prompt”, and produce text, images, or audio files as output.


Text


ChatGPT and Google Gemini (formerly Bard) are examples of Large Language Models (or LLMs) that can take a prompt you provide as input and produce text as output. As an example, using ChatGPT ( https://chatgpt.com/ ) we can enter the prompt “write me a 4 line poem about an elephant”. While writing this blog, I did exactly this, and the output from ChatGPT was as follows:


In the wild where shadows play,
An elephant roams, gentle and gray.
With a trumpet and a heart so grand,
He wanders through the ancient land.


This is pretty good, don’t you think?
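
If you would rather send that same prompt from your own code instead of the web page, the sketch below shows one way to do it using the OpenAI Python SDK. Treat it as an illustration built on assumptions rather than a recipe from this post: it assumes you have installed the SDK, set an OPENAI_API_KEY environment variable, and have access to the model named in the code.

```python
# A hedged sketch of sending a prompt to an LLM programmatically.
# Assumes the OpenAI Python SDK is installed (pip install openai) and that an
# OPENAI_API_KEY environment variable is set; the model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whichever model you have access to
    messages=[
        {"role": "user", "content": "write me a 4 line poem about an elephant"}
    ],
)

print(response.choices[0].message.content)  # the generated poem
```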


Images


Let’s look at a free AI image generator (there are quite a lot to choose from). For this example, I am using https://magicstudio.com/ai-art-generator/ . I entered “moon with meteor shower” as the prompt, and here is the result:


[Generated image: “moon with meteor shower”]

I think you would agree that this is rather pretty. I’d certainly be happy to pay this place a visit, if it were real. The software simply generated an image based upon my input. (Note: as you can see, it’s not perfect – there are some meteor shower trails present, but you really have to look for them. However, the beach, the sea, the clouds, and the moon were all added by the tool, “automagically”.)
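
The tool above is a web page, so you simply type your prompt into a box. If you wanted to generate images from your own code instead, here is a hedged sketch using the image endpoint of the OpenAI Python SDK; as before, the installed SDK, the OPENAI_API_KEY environment variable, and the model name are my assumptions, not something this post relies on.

```python
# A hedged sketch of programmatic image generation with the OpenAI Python SDK.
# Assumes: pip install openai, an OPENAI_API_KEY environment variable, and
# access to the image model named below (an assumption, not a recommendation).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",                  # assumed model name
    prompt="moon with meteor shower",  # the same prompt used above
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # a temporary URL where the generated image can be viewed
```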


Audio


As for audio files, you are limited by your imagination. I can’t put an audio file into this blog, but I can tell you where I went and what I did.


Music: I went to https://www.vidnoz.com/ai-music.html . I input a prompt describing the kind of music I wanted to hear, “peaceful, beach, sunset”, set the music length to 30 seconds, and clicked “Generate”. It did a pretty good job of creating a little tune. If you do a quick online search, you will see that there are loads of AI music generators to choose from.


Text to Voice: My colleagues and I use an application called Speechelo ( https://speechelo.com/ ), for which we pay a subscription. You type in text, and it generates a human-sounding voice that speaks the typed words and outputs an audio file. A great use case is a video voiceover. Depending on how much you pay for your chosen service, the quality can improve considerably.


These are all examples of Generative AI.


Robotics


AI comes into its own in the case of robotics. One of the leading robotics development companies in the world is Boston Dynamics, which has been building robots since the early 1990s, with research roots going back to the 1980s. Their robotic technology can be used in multiple industries, for multiple purposes. Personally, I think their most impressive robot is Atlas, their humanoid robot: https://bostondynamics.com/atlas/


Among a multitude of other uses, robots can (as I mentioned earlier) visually inspect manufactured products for defects, automate picking and pulling within a warehouse facility, and be sent into areas where safety is an issue, such as search and rescue operations following a natural disaster.


A Word of Advice


Although there is a lot of hype about AI, along with the impending fear of being “left behind” if you don’t adopt it, implementing AI needs to be approached with logic that is not clouded by emotion. Before investing in AI, hiring a team, and so on, you need to make sure that (a) your project is doable – that AI is capable of giving you the output you want (since AI is still fairly limited in what it can do, regardless of all the theoretical hype), (b) you have a source from which to obtain relevant and accurate input data, and (c) the project is likely to provide you a good Return on Investment (ROI).


Obviously, the excitement of finding out what AI is capable of, and the race to get there first, needs to be tempered with a cool business head and foresight, with a view to reality, output value, and, of course, maintaining some level of control of the snowball (since we’ve all seen the movie "Terminator")!


To summarize, AI is here to stay, so we need to learn what we can, thoughtfully implement it, and use it to improve:

- our businesses – to drive innovation and revenue
- our lives – we can all do with things being made a bit easier for us, can’t we?
- and our communities – ultimately helping make life better for everyone.


===

How We Can Help

To begin the process of becoming more conversant with AI, NC-Expert provides a one-day starter training seminar: AI Foundations. ( https://www.nc-expert.com/training-classes-by-vendor/ai-training )


In this training, you and your team will be taught the basics of AI, enabling you to begin to determine what kinds of AI projects might help improve your business model.


Once this training has been completed, we can provide further training courses, which increase in complexity as you and your employees gain additional experience and understanding.


Our training courses are delivered by expert instructors, for individual employees (in our public classes) or for private groups, virtually/online (in real time) or at your site. Contact us for details.


You are welcome to visit our website homepage: https://www.nc-expert.com/



About NC-Expert

 

NC-Expert is a privately-held California corporation and is well established within the Wireless and Cyber Security industry certification training, courseware development, and consulting markets. 

NC-Expert has won numerous private contracts with Fortune level companies around the world.  These customers depend on NC-Expert to train, advise, and mentor their staff. 

If you are looking for the best in IT industry training, call us at (855) 941-2121 or contact us by email today.
