Avoiding ChatGPT Sales Mistakes: Tips & Best Practices is a guide for sales and presales professionals to understand some of the risks and pitfalls of using ChatGPT for demo preparation.
Sales and presales professionals are constantly looking for the next big tool to give them a competitive edge at every stage of the sales cycle. This is driven by a need to personalize. Prospects are inundated with hundreds of cold emails and sit through multiple demos of different products, making it hard for sales and presales teams to stand out. Generative AI can be a game-changing technology for these teams.
And from the outset, tools like ChatGPT and Copilot appear to be the perfect assistants. They can draft emails, generate creative content, prep for demos, answer customer questions, and more. But, like any game-changing technology, there are risks. Many of the sales mistakes from using ChatGPT and other generative AI tools happen for the simple reason that so much of this is unknown. For example, regulations haven’t caught up to the technology. This can potentially put sales teams on some very thin ice.
With this in mind, we’ve put together this deep dive to help sales and presales teams identify mistakes using ChatGPT before they happen. This way, teams can make informed decisions, avoid missteps, and harness the full potential of generative AI.
What are the top risks and mistakes when using ChatGPT for sales?
In this post, we’ll discuss the following 4 risks and mistakes when using ChatGPT for sales:
- Legal and copyright
- Security and compliance
- Data recency and accuracy
- It isn’t you
Legal and copyright sales mistakes
*Disclaimer for this section: I am not a lawyer. I’ve done my homework on this topic, but do not take any of this as legal advice. If you want real legal advice, I’d recommend talking to an actual lawyer. Which isn’t me. Since, as we’ve established, I’m not a lawyer. I’m only discussing US law, so what I’m saying may not be relevant for other countries.
When you use generative AI to create any content, including demo prep and other sales use cases, there’s a thorny legal question: can it be copyrighted? And, if so, who owns said copyright? No one knows – it’s complicated. The best summary I’ve found is: Generative Artificial Intelligence and Copyright Law, published in September 2023 by the Congressional Research Service. Yes, the jokes about Congress and technology write themselves, but the CRS is a non-partisan body overseen by the Library of Congress. It’s legit.
As it stands today, US copyright law only recognizes “original works of authorship…created by a human being.” And while you can argue that the back-and-forth of a conversation makes AI-generated content partly human-authored, that argument has yet to hold up in court. The latest guidance I’ve found (again, not a lawyer) is that if an AI generates the content, you should either rewrite it or make substantial changes to it in order to make it your own.
Why salespeople should care about copyright law
*Disclaimer: I didn’t get a law degree between the last section and this one. All of the “I’m not a lawyer” caveats still apply.
Another issue: ChatGPT was trained on copyrighted content. It’s unclear exactly how much copyrighted content, and authors like George R.R. Martin, Jodi Picoult, and John Grisham are suing OpenAI for “systematic theft on a mass scale.” That’s a far bigger issue and out of scope for this post, but it matters because so much of this is up in the air.
My (non-lawyer) recommendation, based on OpenAI’s sharing and reproduction policies, is to be transparent. If you used an AI for a meaningful amount of work, the best practice is to say what you used it for and why. This may also serve as an incentive to write your own content (or use your company’s approved content).
Security, privacy, and compliance sales mistakes
This is another thorny issue, but one that’s likely important to sales professionals – especially those like SEs who are more technically oriented in their roles. There are security, privacy, and compliance risks when using ChatGPT to prep for your demo (or any other use case).
First of all, if your company bans or limits the use of ChatGPT (like Apple, Samsung, and Spotify, among others), then follow your corporate policy. For those who don’t have any guidelines around using it, you should still be cautious about what information you enter into it.
A key sales mistake made with ChatGPT is putting secure information in a conversation. Bottom line, secure data shouldn’t ever go into ChatGPT – examples include API keys, proprietary corporate information, and personally identifiable information (PII). If you’re in sales, it might be tempting to put confidential product documentation or roadmaps into ChatGPT, or maybe use real data as a template for mock data. Don’t do it.
Why can’t you put secure information in ChatGPT? Thanks for asking – here are a few reasons:
- OpenAI may use what you type into ChatGPT to improve its models, which means that a human might review it.
- There are risks with how the data is sent and stored.
- There may be regulations about what data can be used in your prompt.
If you’re using another AI that’s internal and sitting on top of your own data, and has been vetted by your security team, then by all means use it within the limits of what your security or compliance team says is OK. Otherwise, it’s better to err on the side of caution.
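If you do need realistic-looking records for a demo, one safe option is to generate synthetic data rather than pasting in real customer data as a template. Here’s a minimal sketch using only Python’s standard library – all names, fields, and value ranges are made up for illustration:

```python
import random
import string

def mock_customer(seed=None):
    """Generate a fake customer record for demo prep, so no real PII
    ever needs to go into ChatGPT. All values are synthetic."""
    rng = random.Random(seed)  # seeding makes the record reproducible
    first = rng.choice(["Alex", "Sam", "Jordan", "Taylor", "Morgan"])
    last = rng.choice(["Rivera", "Chen", "Okafor", "Nguyen", "Patel"])
    account_id = "ACCT-" + "".join(rng.choices(string.digits, k=6))
    mrr = rng.randrange(500, 20000, 100)  # monthly recurring revenue, USD
    return {"name": f"{first} {last}", "account_id": account_id, "mrr": mrr}

print(mock_customer(seed=42))
```

Because the record is built from scratch, there’s nothing confidential to leak – and the seed means you get the same demo data every time you run it.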
ChatGPT’s data recency and accuracy limitations as you prep for your demo
You may have heard things like “LLMs (large language models) such as ChatGPT require a vast amount of training data.” In this context, training data means all of the data ChatGPT ingests in order to look for patterns. The more text it has, the more it learns about not only the information itself, but about how humans communicate. For example, how to look for context to separate sarcasm from irony from sentences with no subtext (like identifying how most of the things in Alanis Morissette’s Ironic aren’t actually ironic – just unfortunate).
Because so much data gets ingested, it’s very difficult for broad LLMs to stay current (an AI that, for example, analyzes your CRM data is much more narrow and therefore can be current). ChatGPT’s latest training model is, as of today, current through January 2022. ChatGPT does have access to the broader internet with its new “Browse with Bing” feature, but there are several limitations: as of today it can’t access information behind a paywall or information in images (yes, it can analyze images in a chat, but can’t read them on a webpage).
One more consideration here. If you’re in a regulated industry, the response may not reflect current regulations. A key ChatGPT sales mistake would be to take it at its word when it comes to compliance and regulatory guidance. Always check to make sure the answers are compliant with regulations (or ask a lawyer – your company’s, ideally).
Data input sales mistakes
Most consumer-facing AIs have data limits. ChatGPT has a limit of 4,096 tokens. The concept of a token can be a little confusing. Basically, a token is a measure of how many words and characters are in a prompt. All of these count as 1 token:
- A word*
- A space
- A special character
- A number (commas also count, so 1000 is 1 token but 1,000 is 2 tokens)
*This is specific to English. Some languages require multiple tokens per word – for example, languages like German with long compound words, or languages with double-byte characters (Japanese or Mandarin). Putting the rules together, the question “what is a token?” has 4 words, 3 spaces, and a question mark – meaning 8 tokens total.
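The counting rules above can be sketched as a small Python function. Note that this is a rough budgeting heuristic, not how OpenAI’s real tokenizer works – the actual tokenizer merges leading spaces into words and splits rare words apart, so real counts differ:

```python
import re

def estimate_tokens(text: str) -> int:
    """Rough token estimate using the simplified rules above: each word,
    space, special character, and run of digits counts as one token.
    Real tokenizers (e.g. OpenAI's tiktoken) count differently."""
    words = re.findall(r"[A-Za-z]+", text)    # each word is 1 token
    numbers = re.findall(r"\d+", text)        # each run of digits is 1 token
    spaces = text.count(" ")                  # each space is 1 token
    specials = re.findall(r"[^\w\s]", text)   # punctuation/special characters
    return len(words) + len(numbers) + spaces + len(specials)

# "what is a token?" = 4 words + 3 spaces + 1 question mark
print(estimate_tokens("what is a token?"))  # 8
```

For real counts against a specific model, OpenAI’s open-source tiktoken library is the authoritative tool; the sketch above is just for quick back-of-the-envelope budgeting.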
If you’re using ChatGPT to prep for your demo, keep this limit in mind. If you summarize (public-facing) documentation, you could easily go over the limit. A workaround is to input the data in multiple prompts. I’ve found that, even when telling ChatGPT that’s what I’m doing, it doesn’t always work. The risk is that ChatGPT will miss an important piece of information in an earlier prompt, which could impact your results.
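The multi-prompt workaround can be sketched as a simple chunker. The word budget here is an assumption – a rough stand-in for tokens, since English text runs to roughly three-quarters of a word per token:

```python
def chunk_text(text: str, max_words: int = 700) -> list:
    """Split long text into pieces small enough to paste into separate
    prompts. Words are used as a rough proxy for tokens."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# A 2,000-word document becomes three prompts of at most 700 words each.
parts = chunk_text("word " * 2000, max_words=700)
print(len(parts))  # 3
```

When pasting the chunks, it helps to label them explicitly (“Part 1 of 3: …”) and only ask your question after the final chunk – though, as noted above, ChatGPT still won’t always retain everything from the earlier prompts.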
It isn’t you
ChatGPT is a good writer and even better editor (it recently reminded me how to properly use a gerund, which I haven’t thought about much since 8th grade), but it’s not perfect. Even with writing samples, it can’t replicate your communication style perfectly. It may use words or phrases that you don’t use, or references that don’t really make sense. You don’t want to be in front of an audience and not know what to say because the AI wrote your talk track.
In my experience, ChatGPT tends to give the best results when you’re specific about tone. You can also combine tone with a Flesch-Kincaid grade level, so your prompt may look something like “I want a friendly yet professional tone at the 6th grade level.”
One major risk of using ChatGPT to prepare for your demo: sometimes it makes mistakes. I once had it analyze financial reports, but because I didn’t copy the “in thousands,” it interpreted billions as millions. I don’t think audiences would be very forgiving of the “it wasn’t me, it was the AI” excuse. Playing through the accidentals won’t get you out of trouble.
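One guard against this class of mistake is to carry the reporting scale explicitly whenever you move numbers around, instead of relying on a header like “in thousands” that’s easy to drop while copying. A hypothetical sketch:

```python
# Scale factors for common financial-report headers.
SCALE = {"units": 1, "thousands": 1_000, "millions": 1_000_000}

def to_dollars(amount: float, scale: str) -> float:
    """Convert a reported figure to absolute dollars, forcing the caller
    to state the scale instead of leaving it implied by a header."""
    return amount * SCALE[scale]

# "Revenue: 4,567 (in thousands)" is really ~$4.57M, not $4,567.
print(to_dollars(4_567, "thousands"))  # 4567000.0 -> printed as 4567000
```

The same discipline applies to whatever you paste into ChatGPT: include the scale in the prompt itself, right next to the numbers, so it can’t get separated from them.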
AIs like ChatGPT can be great for ideas, especially if you’re having a bit of writer’s block. But there’s a big difference between “the AI helped” and “the AI wrote it for me.”
ChatGPT can help you scale as you prep for your demo
When used properly, generative AI applications like ChatGPT can be incredibly helpful when prepping for your demo. But make sure to avoid the ChatGPT sales mistakes discussed in this post. Your job, and perhaps your company’s reputation (and your company not paying fines) depend on it.
Other ChatGPT sales mistakes you know of? Drop them in the chat! And if you want to talk about how best to use ChatGPT for your sales team, drop us a line.