Everyday AI: Best Practices for Using AI

What You Need to Know About AI

Artificial intelligence (AI) is increasingly woven into how we live, learn, and work, often without us even realizing it. That’s why it’s important for everyone to understand how AI functions and how it’s transforming our everyday lives.

This guide is your introduction to AI — what it is, how to use it appropriately and responsibly, and how to protect yourself from AI-generated deceptive content. You'll also find resources to explore topics more deeply if you're curious to learn more.


Artificial intelligence (AI) is a broad term used to describe any technology that mimics aspects of human intelligence, such as machines or software that perform tasks like problem-solving, predicting, learning, and acting autonomously. Web search engines, Face ID on your phone, spam filters, streaming recommendations from platforms like Netflix and YouTube, voice assistants like Amazon Alexa or Apple Siri, and apps like Google Maps, Expedia, and Uber all fall under this AI umbrella.

Because of the wide range of technologies under this umbrella, lumping them together leads to confusion, misunderstandings, and misdirection. In fact, computer and software engineers cannot even agree upon one definition of AI, as you can see in Table 1.

How Does Generative AI Work?

On a very basic level, when you type a prompt into a generative AI like Copilot or ChatGPT, the AI breaks your text into small units called tokens. Drawing on the data and models it was trained on, it then creates a response token by token, choosing each one according to the probabilities of what usually surrounds those tokens. In essence, generative AI tools analyze patterns in text, such as the order, frequency, and relationships between words, to mimic human speech.

As convincing as this mimicry can be, it means that, unlike a human who understands the actual semantics of a sentence (in other words, what the sentence actually means), generative AI tools don't understand language the way humans do. While many AI tools use words like "summarize" and "analyze" to describe their output, these tools aren't analyzing or summarizing in the same way you would. The best way to describe what AI is doing is this: it gives you its best guess at what an answer to your query should look like.
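The token-by-token process described above can be sketched with a toy "bigram" model. This is purely illustrative: real LLMs use neural networks with billions of parameters and far more context than one previous token, and the tiny training text here is made up.

```python
import random

# A tiny invented "training corpus" (real models train on vast amounts of text).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which token follows which, building a simple lookup table.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, n_tokens, seed=0):
    """Emit tokens one at a time, each chosen from the tokens that
    followed the previous token in the training data."""
    random.seed(seed)
    out = [start]
    for _ in range(n_tokens):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the", 5))
```

The output is always fluent-looking word sequences from the training data, with no notion of whether the resulting sentence is true, which is the same property that makes LLM output convincing even when it is wrong.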


Common AI Terminology

Large Language Models (LLMs) – Self-contained computer programs that mimic the pattern and structure of human language by statistically analyzing vast quantities of language data. The process of analyzing such vast quantities of data is referred to as machine learning. Because LLMs are trained on fixed data sets, they must be updated periodically in order to incorporate information from recent events. Examples include GPT-4o, Claude, Gemini, Llama, and Mistral.

LLM Wrapper – Most people interact with LLMs through software wrappers like ChatGPT, which serve as intermediaries. These wrappers augment an LLM’s fixed information by adding external or user-provided information to the prompt in order to generate more current and relevant responses.

Prompt – Text given to an LLM to help it generate new content by predicting the words associated with that text. Prompts may be processed by the LLM wrapper before going to the LLM.

Tokens – LLMs don’t read words like we do. Instead, LLMs process language by converting text into tokens — small units of word parts, punctuation, or spaces — before analyzing the patterns between them. Unlike humans, who understand meaning, LLMs generate responses based on the statistical relationships between tokens.
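The "Tokens" entry above can be made concrete with a toy sub-word tokenizer. The vocabulary and the resulting splits below are invented for illustration; real tokenizers (such as byte-pair encoding) learn their vocabularies automatically from training data.

```python
def toy_tokenize(text, vocab):
    """Greedily match the longest vocabulary entry at each position,
    the way sub-word tokenizers break text into known pieces.
    Unknown characters fall back to single-character tokens."""
    tokens = []
    i = 0
    while i < len(text):
        for size in range(len(text) - i, 0, -1):  # try longest match first
            piece = text[i:i + size]
            if piece in vocab or size == 1:
                tokens.append(piece)
                i += size
                break
    return tokens

# Invented mini-vocabulary for illustration only.
vocab = {"token", "ization", "un", "believ", "able", "is", " "}
print(toy_tokenize("tokenization is unbelievable", vocab))
```

Notice that words the tokenizer has never seen whole, like "unbelievable," are split into familiar fragments ("un", "believ", "able"); the model then reasons over the statistics of those fragments, not over word meanings.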

Table 1. Definitions of AI

How different sources define AI

Maggiori, E., The AI pocket book. – AI is a research field that tries to create computer programs to perform tasks in a way similar to humans.

Frana, P.L., & Klein, M.J. (Eds.), Encyclopedia of artificial intelligence: The past, present, and future of AI. – "Artificial intelligence describes real or imagined efforts to simulate cognition and creativity." (p. xi)

McCarthy, J., Minsky, M.L., Rochester, N., & Shannon, C.E., A proposal for the Dartmouth Summer Research Project on Artificial Intelligence. (McCarthy is credited with coining the term "artificial intelligence.") – "...how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." (p. 13)

Nilsson, N.J., The quest for artificial intelligence: A history of ideas and achievements. – "activity devoted to making machines intelligent...intelligence is that quality that enables an entity to function appropriately and with foresight in its environment."

Gil de Zúñiga, H., Goyanes, M., & Durotoye, T., A scholarly definition of Artificial Intelligence (AI): Advancing AI as a conceptual framework in communication research. – "the tangible real-world capacity of non-human machines or artificial entities to perform, task solve, communicate, interact, and act logically as it occurs with biological humans." (p. 318)


Verify, Verify, Verify

Why Check Generative AI Outputs?

The main draw of AI tools is saving time. However, whenever you use generative AI, you should verify every claim it makes because of:

  • Hallucinations: AI cannot discern between what is real and what is fake. Hallucinations are when generative AI gives you very convincing, yet totally false information. Rather than true comprehension or critical thinking, generative AI mimics, producing responses based on patterns and probabilities that it learned from analyzing training data. This mimicry can produce hallucinations that sound so convincing that they may be hard to spot if you are not already knowledgeable about a topic. For example, when Google's AI, Bard (now called Gemini), debuted, it claimed that the James Webb Space Telescope (JWST) took the first pictures of an exoplanet. While this seemed plausible to most people, astronomy enthusiasts were quick to spot the falsehood. Even when generative AI tools pull information from external sources outside of their training data (known as retrieval-augmented generation, or RAG), they may misinterpret nuance, humor, or context, leading to inaccurate or misleading outputs (Williams, 2024).
  • Bias: The vast amounts of data used to train AI also contain human bias in the form of historical inequities, stereotypes, and the underrepresentation of certain groups of people. AI trained on this data may reproduce or amplify these biases in its outputs. For example, in March 2016 Microsoft launched "Tay," an AI chatbot designed to learn from user interactions and mimic the speech patterns of a teenage girl. Within hours of launching, Tay began generating racist, xenophobic tweets in response to user interactions, prompting Microsoft to suspend the account after just 16 hours.
  • Plagiarism: Generated outputs from AI may copy from the original material, which means that if you do not edit and add your own thoughts to the outputs, you could be plagiarizing. 
  • Reputation: Professional organizations such as the MLA, APA, and Cambridge University Press all agree that you must give credit to AI (by citing and disclosing your use of the tool). However, all of these organizations also agree that AI can't be considered an author (Cambridge University Press explains that this is because AI cannot be held accountable for its output).

To protect yourself (and your reputation), remember that whenever you use AI, YOU are responsible for the output. Always double-check its claims, heavily edit the output, and disclose your AI use. See ways to disclose your AI usage in the box below.

References

American Psychological Association. (n.d.). APA publishing policies. Retrieved August 19, 2025, from https://www.apa.org/pubs/journals/resources/publishing-policies

Cambridge University Press. (n.d.). Publishing ethics authorship and contributorship journals. Cambridge Core. Retrieved August 19, 2025, from https://www.cambridge.org/core/services/publishing-ethics/authorship-and-contributorship-journals

Modern Language Association. (n.d.). Submitting manuscripts to PMLA. Retrieved August 19, 2025, from https://www.mla.org/Publications/Journals/PMLA/Submitting-Manuscripts-to-PMLA

Williams, R. (2024, May 31). Why Google’s AI Overviews gets things wrong. MIT Technology Review. https://www.technologyreview.com/2024/05/31/1093019/why-are-googles-ai-overviews-results-so-bad/