Publish date: October 13, 2021

Models of Artificial Intelligence


Shownotes

This episode of Sunny Side Up is all about demystifying data science with Celeste Fralick, Chief Data Scientist at McAfee, as our guide. She outlines some of the basic questions that companies (and particularly Sales and Marketing teams) should be asking their data scientists. The models for Artificial Intelligence (AI) that go out into the field are critical to business success – and constantly at risk of decay and vulnerable to bias. If you want your company’s investment in big data to pay off over time, models out in the field need to be monitored. Celeste provides a detailed breakdown of what that monitoring should look like, and which elements go-to-market teams need to ask their data science team about, and track, for optimal results.


About the Guest

Celeste Fralick, Sr. Principal Engineer/Chief Data Scientist, is responsible for innovating advanced analytics and analytic processes at McAfee. She was named to Forbes’ “Top 50 Technical Women in America” and has applied AI to 10 different markets for over 40 years. She holds a Ph.D. in Biomedical Engineering from Arizona State University, with numerous patents and papers.

Connect with Celeste Fralick

Key takeaways

  • A bit about the history of artificial intelligence (AI). What exactly is it and where are its roots? It dates back to predictive analytics in 1940 and evolved over the ensuing decades from the province of statisticians to the realm of data scientists as big data emerged between 1998 and 2005.
  • Celeste explains the elements of the Pyramid of Complexity & Intelligence in Analytics, which runs from architecture and data management through statistics and extends up to machine learning, deep learning, and AI.
  • If you’re creating AI within your business or embedding it in a product, it must all start with data management so that the models within machine learning, deep learning, and AI are repeatable and reproducible.
  • Celeste highlights two areas where Sales and Marketing teams should be proactive:
  • A recent survey found that 51% of companies do not monitor models after deployment – which Celeste says is an unacceptable standard.
  • It’s critical for companies to ask relevant questions about models created by data scientists, who may not have a sales and marketing background or perspective.
  • Monitoring models in the field must be routine, consistent, and thorough.
  • Recent survey results also indicate that 50% of companies do not evaluate data and models for bias. This can have significant real-world consequences and create false negatives or positives in critical areas such as credit reports or medical outcomes.
  • The quality of predictive outcomes for new opportunities and customers will be compromised if models are built on biased or incomplete sales and marketing data.
  • Model Decay is a final element that Sales and Marketing teams need to track.
  • At the end of the day, it’s critical that those working on the commercial side speak up and ask questions of their data scientists. Without ongoing collaboration and necessary adaptations, the marketplace results can only be as good as the AI (always at risk of decay) on which they are based.

Quote

“Just start probing (your data scientists). My mantra is, ‘Just ask why.’ Ask why five times and pretty soon you’ll get answers. If not, you better go up the chain because something is wrong.”

Highlights from the episode

Celeste Explains the Interrelationship Between Various Facets of AI.

Deep learning is a subset of machine learning, and both fall under the umbrella of AI (Artificial Intelligence). They differ in complexity and intelligence, as well as in how heavily they depend on architecture and data management.

What Sales & Marketing Should be Asking About Models in the Field.

Monitoring models is not just a matter of collecting information. Celeste advises diving deeper: What kind of information are you feeding back into the system? How often are you re-training your model? Is your monitoring real-time or batch? Companies need to watch for the following (a minimal drift-check sketch follows the list):

  • Concept Drift: Data labels change from good to bad, yes to no, malicious to benign.
  • Data Decay: Changes in data volumes and features over time.
  • Cyber: Ransomware and other threats delivered via trojans, phishing, and spam.
  • Anomalies and outliers.
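
As a concrete illustration of the kind of drift check this implies, here is a minimal sketch in Python. It assumes you have kept a reference sample of a feature from training time and can pull the same feature from production scoring logs; the function name, data, and threshold are illustrative placeholders, not McAfee tooling.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's training-time distribution (expected) with its
    production distribution (actual). A PSI above roughly 0.2 is a common
    rule of thumb for drift that warrants investigation or re-training."""
    # Bin edges come from the training-time distribution
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    # Fold production values outside the training range into the end bins
    actual = np.clip(actual, edges[0], edges[-1])
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Guard against empty bins before taking logs
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Simulated example: the production distribution has shifted since training
rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)   # e.g., a lead-score feature at training time
prod_scores = rng.normal(0.4, 1.2, 10_000)    # the same feature in recent scoring runs
psi = population_stability_index(train_scores, prod_scores)
print(f"PSI = {psi:.3f} -> {'possible drift, review the model' if psi > 0.2 else 'stable'}")
```

The same idea extends to monitoring prediction distributions and label mixes (concept drift), and it can be wired into a batch job or a near-real-time check depending on how the model is served.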
What Sales & Marketing Needs to be Asking Data Scientists.

How do you evaluate your data and models for bias? If Marketing and Sales can’t get an answer, that’s a red flag. Data scientists may say they use Explainability, or XAI (Explainable AI), and that’s good – but not enough. These tools help you understand the direction and strength of the features that drove the model to the answer it gave. Correcting imbalanced data sets is crucial and can be achieved with de-biasing algorithms. Proactive accountability within the C-suite is also important for a healthy AI ecosystem.
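
To make the bias question concrete, here is a sketch of the simplest kind of check a go-to-market team could ask for: comparing selection and error rates across segments of scored leads. The arrays and segment labels below are hypothetical placeholders, not anything specific to McAfee’s models.

```python
import numpy as np

# Hypothetical scored-lead data: true outcomes, model predictions, and a
# segment attribute (e.g., region or company size) to check for bias across.
y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred  = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 0])
segment = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "B", "A"])

for group in np.unique(segment):
    mask = segment == group
    positives = mask & (y_true == 1)
    negatives = mask & (y_true == 0)
    selection_rate = y_pred[mask].mean()              # how often the model says "yes"
    false_neg_rate = (y_pred[positives] == 0).mean()  # good leads the model missed
    false_pos_rate = (y_pred[negatives] == 1).mean()  # bad leads the model flagged
    print(f"segment {group}: selection={selection_rate:.2f}, "
          f"FNR={false_neg_rate:.2f}, FPR={false_pos_rate:.2f}")
```

Large gaps in these rates between segments are exactly the kind of real-world consequence Celeste warns about, and a reason to ask the data science team whether a de-biasing step or a re-balanced training set is needed.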

Big Takeaway: Don’t Be Afraid to Follow Up with Data Scientists.

As data science takes on an ever more powerful role in the marketplace, it’s important for Sales and Marketing teams not to shy away from asking data scientists what kind of data they are training on, what kind of performance they are getting, and what kind of ROD (return on data) is realistic and attainable.


Sunny Side Up
B2B podcast for Smarter GTM™