Solving the First Conversation Problem in AI

Samuel Zaruba Smith, Ph.D.(c)
Date Published: 16 May 2024

Everyone knows that AI is hot right now.

Every executive, official or researcher I meet is trying to do something with AI. Remarkably, it’s still hot over a year after ChatGPT’s early demo made major headlines at the end of 2022 and Sam Altman became Silicon Valley’s newest pop-star CEO.

One of the big use-case trends to emerge over the past year is solving what AI researchers call the First Conversation Problem: organizations collecting partner or customer data to start an intake pipeline, such as one for hiring or sales.

The First Conversation Problem, a term gaining adoption in the engineering and research community, applies broadly. In the private sector it goes by lead generation or business development, but more formally it covers any interaction where many actions are needed before that first conversation with another human (non-AI).

For firms finding customers, it is about reaching meaningful interactions with potential customers at scale. In hiring and recruiting, it is filtering through as many candidates as possible while still getting meaningful interactions back. In cybersecurity, it is robustly protecting individual-level access at scale. In digital content, people send out enormous numbers of initial messages hoping to start conversations with people they want to chat with. Compliance and customer service face similar dynamics. In every case, the first conversation with someone outside the main organization's network is a huge time sink.

In response, companies have been using generative AI to save human hours and help automate that step. For example, airlines have successfully used generative AI for personalized outreach and follow-ups with customers to re-engage them.

Other successful examples include tools that F500 sales departments have purchased to quickly adopt generative AI, with well-documented improvements in cold calling and cold outreach, such as http://www.Outreach.io and http://www.Copilotai.com/.

Recruiting/hiring has also seen recent entrants solving this same First Conversation Problem, notably http://Cheeki.io and http://www.TeamTailor.com.

One reason you're seeing these single-use generative AI firms sprouting up is that generalizable AI systems such as the Microsoft-backed ChatGPT or Google's Gemini are financially expensive and complex to run. OpenAI and other AI firms struggle to afford their massive compute bills.

Hence, the old Unix philosophy still has a place in this AI-crazed world. Early adopter use cases are focusing on single-purpose AI agents that solve a single type of First Conversation Problem. Do one thing and do it well.

So, how can you start implementing these single-use AIs in your business to solve your organization's First Conversation Problem?

There are two paths before you: homebrew your own agents with your own data, or find a commercial plugin in a platform you are already paying for.

Homebrewing your own AI agent is only cost-effective if you're a large organization with the resources and expertise to put toward hand-crafting a specially designed IT solution. Expect homegrown adoption in public companies, regulated industries, and government-related entities.

In the airline industry, for example, there is a high likelihood that homegrown AI will be a real benefit for their bottom line. In smaller, faster-moving businesses, such as general IT support, marketing and sales, commercial products and plugins are the best way to add value.
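To make the homebrew path more concrete, here is a minimal sketch of what a single-purpose first-conversation agent can look like. Everything in it is hypothetical: the class name, the fixed template standing in for a generative model call, and the lead fields are illustration only, not a reference to any product named above.

```python
# Hypothetical single-purpose agent: drafts a first-conversation message
# for a sales or recruiting lead and queues it for human review before
# anything is sent. A fixed template stands in for the generative model.
class FirstConversationAgent:
    TEMPLATE = (
        "Hi {name}, I noticed your work on {topic} at {company}. "
        "Would you be open to a short chat about {offer}?"
    )

    def __init__(self):
        self.review_queue = []  # drafts awaiting human sign-off

    def draft(self, lead):
        # In a production system this is where a call to a generative
        # model (homegrown or commercial) would go.
        message = self.TEMPLATE.format(**lead)
        self.review_queue.append({"lead": lead, "message": message})
        return message

agent = FirstConversationAgent()
msg = agent.draft({"name": "Dana", "topic": "cloud cost audits",
                   "company": "Acme", "offer": "our audit tooling"})
```

The point of the sketch is the shape, not the template: a single-purpose agent does one narrow drafting task and hands the result to a human, which is where the auditing discussed below attaches.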

For readers thinking about AI adoption, it is important to plan for auditing your final AI solution, even when adopting a third-party AI plugin. Red teaming is now standard practice for most customer-facing or security-related applications of AI, in addition to classical software oversight for QA, legal, data privacy, governance, risk, and compliance.

This means keeping records of all your generative AI content for liability purposes and, initially, running a manual review process for everything the AI produces. Once the system is up and running, the manual auditing can gradually ease off as the AI performs better over time on fresh, well-curated data and needs less supervision, not unlike training a human for a new job.
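The record-keeping and easing-off pattern can be sketched as a small audit log. This is an illustrative design, not a recommendation: the class name, the 100% starting review rate, the 10% floor, and the window sizes are all assumptions for the sake of the example.

```python
import random

# Sketch of a generative AI audit trail with sampled manual review:
# every generated message is logged permanently, and the fraction
# routed to a human reviewer shrinks as the observed approval rate
# of past reviews rises. All thresholds here are illustrative.
class GenAIAuditLog:
    def __init__(self, seed=0):
        self.records = []    # permanent log of all generated content
        self.approvals = []  # True/False outcomes of past human reviews
        self.rng = random.Random(seed)

    def review_rate(self):
        # Start at 100% manual review; ease off toward a 10% floor
        # as the approval rate over the last 100 reviews improves.
        if len(self.approvals) < 20:
            return 1.0
        recent = self.approvals[-100:]
        approval = sum(recent) / len(recent)
        return max(0.1, 1.0 - approval)

    def log(self, prompt, output):
        needs_review = self.rng.random() < self.review_rate()
        self.records.append(
            {"prompt": prompt, "output": output, "reviewed": needs_review}
        )
        return needs_review

audit = GenAIAuditLog()
flag = audit.log("re-engage lapsed customer", "Hi Sam, we miss you ...")
```

Because fewer than 20 reviews have been recorded, the first draft is always flagged for review; only after a track record accumulates does the sampling rate drop, mirroring how supervision of a new human hire relaxes over time.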

Right now, there is a Cambrian explosion of companies offering generative and other AI services. This will only continue over the next few years as engineers become better at solving First Conversation Problems in a variety of different domains by using AI. It’s an exciting time to be in the field.

Recommended further reading: Prompt engineering and the first conversation: http://www.nngroup.com/articles/ai-prompt-structure/

Author’s disclaimer: I am an unpaid advisor to the company listed above, http://Cheeki.io. I am a former Microsoft employee and have worked with some of the AI products above, including those from Microsoft.

About the author: Samuel Zaruba is an incoming faculty member in the Von Allmen School of Accountancy at the University of Kentucky’s Gatton School of Business and their Center for Data Analytics. His Ph.D. research is with the University of Nevada’s Center for Cybersecurity & Government Policy. He has spent more than a decade working with and as a consultant for Microsoft, PricewaterhouseCoopers (PwC), Bank of America, AT&T, Amazon Web Services, public healthcare, energy utilities, cybersecurity organizations, and government organizations, including as a researcher for the US National Science Foundation (NSF). He has published multiple times in the ISACA Journal, where his research focuses on IT risk management. He has taught university coursework and continuing education credits for professionals.
