Leveraging Data Insights: From Risk Management to Marketing Optimization

Thanks to everyone who joined our meetup co-hosted with JetBrains last Thursday! It was wonderful to reconnect after our September conference. We hope you enjoyed the talks and had some insightful conversations.

Below is a summary of the talks & key takeaways:

Jesse Koreman and Kally Chung's talk, MPLX - Machine Learning Driven Risk Management, focused on three core issues with the current system:

✦ Timing Mismatch: Inaccurate timing in risk assessments.
✦ Inaccurate Assumptions: Flawed risk assumptions raised concerns about insufficient deposit holdings.
✦ Merchant Dissatisfaction: Inaccurate assessments strained merchant relationships, especially regarding deposit requirements.

Using a data-driven approach with survival analysis and nonparametric curve fitting, Jesse and Kally's team achieved better accuracy, reduced deposit requirements, and improved merchant satisfaction, demonstrating MPLX's strong impact.
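For readers curious what "survival analysis with nonparametric curve fitting" looks like in practice, here is a minimal, illustrative Kaplan-Meier estimator (toy data and function name are our own, not the team's actual code):

```python
# Illustrative sketch: the Kaplan-Meier estimator, a standard nonparametric
# survival-analysis tool for right-censored data of the kind a risk team
# might use to model time-until-chargeback per merchant.

def kaplan_meier(durations, observed):
    """Return (time, survival probability) pairs.

    durations: time until the event, or until the record was censored
    observed:  1 if the event was seen, 0 if the record is censored
    """
    pairs = sorted(zip(durations, observed))
    survival = 1.0
    curve = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        events = sum(1 for d, e in pairs if d == t and e == 1)
        at_risk = sum(1 for d, _ in pairs if d >= t)
        if events:
            survival *= 1 - events / at_risk  # step down at each event time
            curve.append((t, survival))
        while i < len(pairs) and pairs[i][0] == t:  # skip ties at time t
            i += 1
    return curve

# Toy data: days until a loss event; 0 marks a censored record
print(kaplan_meier([5, 8, 8, 12, 15, 20], [1, 1, 0, 1, 0, 1]))
```

The estimated curve can then be smoothed or extrapolated with a fitted parametric tail, which is where the curve-fitting step comes in.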

Bas Stinenbosch's talk, Optimize your marketing expenses: An overview of different methods, discussed the evolution of marketing mix models, from linear regression to Bayesian models, to optimize spend across channels. Challenges include limited weekly data points and privacy regulations, complicating multi-channel attribution. Bayesian modeling, by incorporating prior knowledge, offers actionable insights with limited data. Bas recommends combining marketing mix modeling, multi-touch attribution, and A/B testing for optimal marketing strategies.
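One building block that appears in marketing mix models of every flavor, linear or Bayesian, is the geometric "adstock" transform, which captures the carry-over effect of ad spend into later weeks. A minimal sketch (the spend figures are invented):

```python
# Illustrative sketch: geometric adstock, a standard transform in marketing
# mix modeling. Each week's effect carries a fraction `decay` of the
# previous week's accumulated effect on top of the new spend.

def adstock(spend, decay=0.5):
    carried, out = 0.0, []
    for s in spend:
        carried = s + decay * carried
        out.append(carried)
    return out

weekly_spend = [100, 0, 0, 50]
print(adstock(weekly_spend))  # [100.0, 50.0, 25.0, 62.5]
```

In a Bayesian setup, `decay` itself gets a prior and is inferred from the data, which is one way prior knowledge compensates for having only a few dozen weekly observations.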

Jodie Burchell, PhD's talk, Lies, Damned Lies, and Large Language Models, explored why hallucinations in LLMs happen and ways to mitigate them. LLMs, powered by vast datasets and transformer architecture, can produce impressive but sometimes misleading content. Hallucinations fall into two categories: faithfulness errors, where the model strays from the source, and factual errors, where it confidently shares inaccurate "knowledge." Solutions include prompt-tuning, self-checking, and grounding responses with live data. While hallucinations may persist, understanding and reducing their risks ensures LLMs are applied responsibly.


๐๐จ๐ง๐ฎ๐ฌ ๐Ÿ๐ซ๐จ๐ฆ ๐‰๐จ๐๐ข๐ž
For those in the small group chat after the talk, you already have these 😉 Sharing with the larger community:

✦ Another talk by Jodie: Mirror, Mirror: LLMs and the Illusion of Humanity
Watch: https://lnkd.in/eZzVFC5y

✦ On the measure of intelligence
https://lnkd.in/fcWYgwr

✦ Levels of AGI for Operationalizing Progress on the Path to AGI
https://lnkd.in/dBNwvCjZ

Next

Real-life ML problem solving.