Generative AI 101
With everyone talking about new tools like ChatGPT over the last 10 months or so, you might be wondering: what exactly is generative AI?
What is Generative AI?
Generative AI is a subset of artificial intelligence (AI) that focuses on the creation (or generation) of text, images, data and audio. This revolutionary step in computer science was made possible by neural networks. Neural networks are a method of processing data inspired by the human brain – using nodes in a layered structure to perform what’s known as deep learning. Deep learning is used to process data, recognize patterns and then produce highly accurate predictions based on those patterns.
In other words, generative AI uses a type of machine learning that doesn’t need to be explicitly programmed. Instead, deep learning and natural language processing (NLP) are used to create large language models, or LLMs. These models are trained on massive data sets and can then be prompted (via a text command or file upload) to create content similar to what the model observed in the training data. Depending on the tool’s settings, these prompts may also be added to the training data – allowing the model to further refine itself with use.
So, what is gen AI used for?
Generative AI Use Cases
Generative AI can be used for an assortment of tasks, and use cases continue to grow as the technology advances.
Currently, generative AI use cases fall into three major categories:
- Creating content and brainstorming ideas
- Automating tasks and increasing efficiency
- Personalization at scale
Generative AI tools are great for brainstorming content ideas, creating article outlines and even creating first drafts. Similar to traditional writing prompts, generative AI is great for overcoming a creative slump. However, we strongly recommend having a human author create the final version of any important content (e.g., website copy).
Automating tasks is also a breeze with many generative AI tools. In some cases, you may not even notice you’ve been using generative AI in the automation of tasks. For example, gen AI is used in many email programs to help suggest commonly used phrases and sentences.
Another use case for generative AI is mass personalization, or personalizing content at scale. Tools like ChatGPT can be helpful when personalizing a message with more than just a few placeholders (e.g., the recipient’s name). Business owners looking to add a personal touch to their newsletter and recent graduates looking for a hand with thank-you cards could both benefit from using generative AI.
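To make the idea of personalization at scale concrete, here’s a minimal sketch of the underlying pattern: a prompt template is filled in once per recipient, and each completed prompt would then be sent to a generative AI tool. The template wording and recipient fields below are our own illustrative examples, not taken from any specific product.

```python
# Illustrative prompt template with named placeholders. The fields
# (name, event, topic) are hypothetical examples.
PROMPT_TEMPLATE = (
    "Write a short, friendly thank-you note to {name}, "
    "who attended our {event} and is interested in {topic}."
)

# Sample recipient data, as might come from a newsletter list or CRM.
recipients = [
    {"name": "Avery", "event": "spring workshop", "topic": "email marketing"},
    {"name": "Jordan", "event": "spring workshop", "topic": "local SEO"},
]

def build_prompts(template, people):
    """Return one fully personalized prompt per recipient."""
    return [template.format(**person) for person in people]

prompts = build_prompts(PROMPT_TEMPLATE, recipients)
for prompt in prompts:
    print(prompt)
```

The generative AI tool then writes a unique note from each prompt – far more personal than swapping a single name into otherwise identical copy.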
Popular AI Platforms & Tools
Some of the most popular generative AI tools include:
- Google Bard
- Google Search Generative Experience (SGE)
- Bing Chat
- GitHub Copilot
- Photoshop (generative fill)
Another honorable mention we’d like to include is Genmo AI. It’s not on many top 10 lists for AI tools, but we’ve had a great experience using it at Nuaveu.
There are countless generative AI tools not mentioned here, and more are being launched each day. Many products and services you’re already familiar with are likely planning AI integrations as well (e.g., self-driving cars and video games using AI for NPCs).
Prompts, Engineering & Tuning
So, you’ve decided to try out a generative AI tool – now what? Now comes the fun part: prompts. Prompts are the inputs given to generative AI tools, providing the parameters for generating content. While prompts are often text inputs, they can also be file uploads, as is the case with many image and video generation tools. Some tools use multiple prompts (e.g., a file upload coupled with text inputs) to generate content, and some tools like ChatGPT also have settings for custom instructions and/or parameters to be considered.
Different gen AI tools require different types of prompts, and some tools may have restrictions on what types of tasks they can perform. For example, ChatGPT provides responses for “what should I consider when shopping for car insurance” whereas Google SGE doesn’t (screenshots below).
Prompt engineering is the process of crafting clear instructions, or prompts, to guide a tool’s outputs toward a specific outcome. Prompt tuning is the process of optimizing and adjusting prompts to improve the LLM’s performance on a specific task. NVIDIA’s introduction to LLMs does a great job of illustrating how prompt engineering results in better prompts while prompt tuning (or p-tuning) results in better LLMs.
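Prompt engineering can be sketched in code. Below, the same request appears first as a vague one-liner and then rebuilt with an explicit audience, format and constraints – the kind of structure that tends to guide outputs toward a specific outcome. The wording and the helper function are our own illustration, not from NVIDIA’s material.

```python
# A vague prompt leaves the tool guessing about audience, length and format.
vague_prompt = "Write about car insurance."

def engineer_prompt(task, audience, fmt, constraints):
    """Assemble a structured prompt from explicit parameters (hypothetical helper)."""
    lines = [
        f"You are an assistant writing for {audience}.",
        f"Task: {task}",
        f"Format: {fmt}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# The same request, engineered: role, task, format and constraints spelled out.
engineered_prompt = engineer_prompt(
    task="Explain what to consider when shopping for car insurance.",
    audience="first-time car buyers",
    fmt="a numbered list of 5 items, one sentence each",
    constraints=["Use plain language", "Do not recommend specific insurers"],
)
print(engineered_prompt)
```

Prompt tuning would then iterate on wording like this against real outputs, while p-tuning adjusts the model itself rather than the text of the prompt.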
The Ethics of Generative AI
Generative AI has breathed new life into age-old ethical debates surrounding the use of technology to magnify discrimination and manipulate public opinion. Search engines often deny liability for their results, claiming they’re a reflection of the internet and society. Will the creators of generative AI tools be able to do the same?
Gen AI has also reignited debates over publishing rights and the liability of tech companies regarding the content they serve. Google in particular struggled with citing sources for results generated by Bard and SGE during initial testing. And some results displayed during tests have been incredibly troubling. For example, Google SGE received a lot of criticism during testing for displaying results calling Hitler and Mao Zedong great leaders from history.
As many pundits and experts have already pointed out, the ethical use of AI begins with setting principles and the ethical training of models. While many companies, like Google, have committed to AI principles – we still believe there is a long way to go. For example, Google’s first principle is “Be Socially Beneficial,” which they claim will be achieved by weighing a broad range of social and economic factors when considering the development and/or use of AI technologies. It’s unclear what framework will be used to determine what is considered socially beneficial – limiting the value of the principles in place. It’s important that as many stakeholders as possible be involved in the AI conversation, because vague principles and self-regulation are a recipe for disaster.
Generative AI in the Workplace
Organizations of every kind – from public schools to publicly traded companies – are establishing rules on the use of generative AI. From our observations, most organizations tend to lean toward limiting its use. One reason many companies have limited the use of generative AI at work is the risk of trade secrets being exposed: the LLMs that generative AI tools are built on can store conversations and information shared with them to further train and refine the model.
For example, if the CEO of Coca-Cola asked Google Bard what it thought of the secret Coke recipe, the recipe would have effectively leaked into the public domain (in theory, another user could then ask Bard for the Coke recipe and get the correct answer).
Generative AI is quickly transforming the world we live in, but there are still a lot of kinks to work out. It’s crucial to educate and include as many participants as possible from all walks of life as we navigate these uncharted waters. Hopefully this article helps you understand and contribute to the AI revolution. And if you have questions or need assistance in leveraging generative AI for your business, don’t hesitate to reach out to our experts. We’re eager to help you navigate this exciting AI frontier.