Learn about generative AI

Generative Artificial Intelligence (AI) is a type of AI that can help you create content. It can help you be more creative, productive, and knowledgeable.

In this article, you can learn about generative AI, including:

  • What generative AI is and how it works
  • How to use generative AI and evaluate the accuracy of its responses
  • How Google develops AI

What is generative AI?

Generative AI is a type of machine learning model. Generative AI is not a human being. It can’t think for itself or feel emotions. It’s just great at finding patterns.

In the past, AI was used to understand and recommend information. Now, generative AI can also help us create new content, like images, music, and code.

How machine learning models are trained

Machine learning models, including generative AI, learn through a process of observation and pattern matching known as training. For a model to understand what a sneaker is, it’s trained on millions of photos of sneakers. Over time, it recognizes that sneakers are objects that humans wear on their feet with laces, soles, and a logo.

The model can then use its training to:

  1. Take an input like “Generate an image of sneakers with a goat charm.” 
  2. Connect what it’s learned about sneakers, goats, and charms.
  3. Generate an image, even if it hasn’t seen an image like that before. (A simplified sketch of this pattern-matching idea follows the list.)
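
The sketch below is a very rough illustration of “learning patterns from examples” in Python. The labels, features, and numbers are invented for this example; real models are trained on millions of images with far richer representations, and generating new images involves much more than the simple matching shown here.

```python
# Toy illustration only: "training" here just averages a few hand-made
# feature vectors. This is not how large image models are actually trained.
# Hypothetical features per example: [has_laces, has_sole, worn_on_feet]
training_examples = {
    "sneaker": [[1, 1, 1], [1, 1, 1], [0, 1, 1]],  # the third is a slip-on
    "goat":    [[0, 0, 0], [0, 0, 0]],             # invented non-sneaker data
}

def centroid(vectors):
    """Average the example vectors into one learned 'pattern' per label."""
    return [sum(column) / len(vectors) for column in zip(*vectors)]

patterns = {label: centroid(vecs) for label, vecs in training_examples.items()}

def classify(features):
    """Label a new example by whichever learned pattern it is closest to."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(patterns, key=lambda label: squared_distance(patterns[label], features))

print(classify([1, 1, 1]))  # -> "sneaker": it matches the learned sneaker pattern
```

The scale and methods differ enormously in practice, but the core idea is the same: the model turns many examples into patterns it can apply to inputs it has never seen before.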

How Large Language Models power generative AI

Generative AI and Large Language Models (LLMs) are closely related technologies. Generative AI can be trained on any type of data, while LLMs use words as their main source of training data.

Experiences powered by LLMs, like Gemini and Search Generative Experiences, can predict words that might come next based on your prompt and the text they’ve generated so far. They’re given flexibility to pick probable next words that match patterns they learned during training. This flexibility is what lets them generate creative responses.

If you prompt them to fill in the phrase “Harry [blank],” they might predict the next word is “Styles” or “Potter.”
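
To give a rough sense of what “pick probable next words” means, here is a minimal, hypothetical Python sketch. The word list and probabilities are made up for illustration; real LLMs learn probability distributions over enormous vocabularies from their training data and condition on your whole prompt, not just a single word.

```python
import random

# Hypothetical probabilities for the word that follows "Harry".
# Real models learn values like these from training data; these are invented.
next_word_probs = {
    "harry": {"potter": 0.6, "styles": 0.3, "houdini": 0.1},
}

def predict_next(word, sample=True):
    """Pick a plausible next word for the given word."""
    options = next_word_probs.get(word.lower(), {})
    if not options:
        return "[unknown]"
    if sample:
        # Sampling from the distribution is what gives varied, "creative" answers.
        return random.choices(list(options), weights=list(options.values()), k=1)[0]
    # Always taking the single most probable word gives the same answer every time.
    return max(options, key=options.get)

print(predict_next("Harry"))                # e.g. "potter", sometimes "styles"
print(predict_next("Harry", sample=False))  # always "potter"
```

Running the sampling version a few times shows why these systems can answer the same prompt differently: they choose among probable next words rather than always returning the single most likely one.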

How to use generative AI

Important: Google’s experiences powered by generative AI can help you start the creative process. They’re not meant to do all the work for you or be the creator.

Here are three ways you can use generative AI:

  • Brainstorm your creative ideas. For example, get help writing a prequel to your favorite movie.
  • Ask questions that you didn’t think could be answered, like “Which came first, the chicken or the egg?”
  • Get an extra boost of help. Ask it to suggest a title for a story you’ve written, or get help identifying the species of an animal or insect in an image.

As you explore, create, and learn new things with generative AI, it’s important to use it responsibly. For details, review our Generative AI Prohibited Use Policy.

AI can and will make mistakes

Because generative AI is experimental and a work in progress, it can and will make mistakes:

  • It may make things up. When generative AI invents an answer, it’s called a hallucination. Hallucinations happen because, unlike Google Search, which retrieves information from the web, LLMs don’t retrieve information at all. Instead, LLMs predict which words are likely to come next based on your input.
    • For example, you might ask, “Who’s going to win women's gymnastics at the 2032 Brisbane Summer Olympics?" and get a response, even though the event hasn't happened yet.
  • It may misunderstand things. Sometimes, generative AI products misinterpret language, which changes the meaning.
    • For example, you may want to learn more about bats, the animal that lives in caves. If you ask for information about bats, it might tell you about the bats used in baseball, cricket, and softball.

Always evaluate responses

Think critically about the responses you get from generative AI tools. Use Google and other resources to check information that’s presented as fact.

If you come across something that isn’t right, report it. Many of our generative AI products have reporting tools. Your feedback helps us refine the models to improve generative AI experiences for everyone.

Use code with caution

Our generative code features are still experimental and you’re responsible for your use of suggested code or coding explanations. Please use discretion and carefully test and review all code for errors, bugs, and vulnerabilities before relying on it. Complying with any license requirements, such as where we provide citations to open source code repositories, is your responsibility. Learn more.

How Google develops AI

To make sure we build tools that make the world better for everyone, we developed a set of AI principles in 2018. These principles describe our goals to develop bold technology that can tackle some of society's biggest challenges in a responsible way.

For example, we use AI to:

  • Support efforts to curb climate change, like reducing stop-and-go traffic to lower vehicle emissions
  • Predict or monitor natural disasters, like forecasting floods in more than 20 countries and tracking the real-time boundaries of wildfires
  • Support healthcare innovations, like making tuberculosis screening more accessible and helping with early detection of breast cancer

Our principles also list areas we won’t pursue with AI, like technologies that cause overall harm or violate international law and human rights.

Check out our full list of AI principles.

How data helps Google develop generative AI in Search

To develop and improve generative AI experiences in Search and the machine learning technologies that power them, Google uses people’s interactions with Search and with those experiences. This can include what they search for and the feedback they give, like a thumbs up or thumbs down. Human review is one of many ways we evaluate and improve the quality of our results and products responsibly.

When trained reviewers work to improve the quality of Search’s machine learning models, we take a number of precautions to protect users’ privacy:

  • Data that reviewers see and annotate is disconnected from users’ accounts.
  • Automated tools help recognize and remove a broad range of identifying info and sensitive personal information. (A simplified sketch of this kind of redaction follows this list.)
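
The sketch below is a simplified, hypothetical illustration of what automated removal of identifying info can look like in general; it is not a description of Google’s actual tooling. It masks email addresses and phone-number-like strings with regular expressions before text would reach a reviewer.

```python
import re

# Hypothetical, minimal redaction rules. Real systems cover far more
# categories of identifying and sensitive information than these two.
REDACTION_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),
]

def redact(text):
    """Replace email- and phone-like substrings with placeholders."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact me at jane.doe@example.com or +1 (555) 010-0199"))
# -> Contact me at [EMAIL] or [PHONE]
```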
