Leveraging Data Insights: From Risk Management to Marketing Optimization
Thanks to everyone who joined our meetup co-hosted with JetBrains last Thursday! It was wonderful to reconnect after our September conference. We hope you enjoyed the talks and had some insightful conversations.
Below is a summary of the talks & key takeaways:
Jesse Koreman and Kally Chung's talk "MPLX - Machine Learning Driven Risk Management" focused on three core issues with their existing risk system:
• Timing Mismatch: risk assessments relied on inaccurate timing.
• Inaccurate Assumptions: flawed risk assumptions raised concerns that deposit holdings were insufficient.
• Merchant Dissatisfaction: inaccurate assessments strained merchant relationships, especially around deposit requirements.
Using a data-driven approach with survival analysis and nonparametric curve fitting, Jesse and Kally's team achieved better accuracy, reduced deposit requirements, and improved merchant satisfaction, demonstrating MPLX's strong impact.
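The talk itself did not share code, but the kind of nonparametric survival estimate it leans on can be sketched in a few lines. Kaplan-Meier is the standard nonparametric choice, so it is used here as an assumption; the merchant durations and event flags below are purely hypothetical:

```python
# Minimal Kaplan-Meier estimator, a standard nonparametric survival
# method of the kind the talk describes. Data below is illustrative only.

def kaplan_meier(durations, observed):
    """Return (time, survival_probability) pairs.

    durations: time until the event or until censoring, per merchant
    observed:  1 if the event (e.g. a loss) occurred, 0 if censored
    """
    pairs = sorted(zip(durations, observed))
    at_risk = len(pairs)
    survival = 1.0
    curve = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        events = 0
        removed = 0
        # Group all merchants sharing the same time point.
        while i < len(pairs) and pairs[i][0] == t:
            events += pairs[i][1]
            removed += 1
            i += 1
        if events:
            survival *= 1.0 - events / at_risk
            curve.append((t, survival))
        at_risk -= removed
    return curve

# Hypothetical merchant data: days until a loss event (or censoring).
durations = [5, 8, 8, 12, 15, 20, 20, 30]
observed  = [1, 1, 0, 1,  0,  1,  0,  0]
print(kaplan_meier(durations, observed))
```

A curve like this gives the probability a merchant survives past each time point, which is the sort of quantity a deposit requirement can be calibrated against.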
Bas Stinenbosch's talk "Optimize your marketing expenses: An overview of different methods" discussed the evolution of marketing mix models, from linear regression to Bayesian models, for optimizing spend across channels. Challenges include the small number of weekly data points and privacy regulations, which complicate multi-channel attribution. Bayesian modeling, by incorporating prior knowledge, offers actionable insights even with limited data. Bas recommends combining marketing mix modeling, multi-touch attribution, and A/B testing for an optimal marketing strategy.
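The linear-regression end of the spectrum Bas described can be sketched very simply: transform raw spend with an adstock (carry-over) effect, then fit ordinary least squares. The spend figures, the 0.5 decay rate, and the coefficients below are all invented for illustration, not from the talk:

```python
import numpy as np

def adstock(spend, decay):
    """Carry a fraction `decay` of each week's advertising effect
    into the following week (geometric adstock)."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for i, x in enumerate(spend):
        carry = x + decay * carry
        out[i] = carry
    return out

rng = np.random.default_rng(0)
weeks = 52                            # one year of weekly data points
tv = rng.uniform(0, 100, weeks)       # hypothetical weekly TV spend
search = rng.uniform(0, 50, weeks)    # hypothetical weekly search spend

# Simulated sales: baseline + channel effects + noise.
sales = 200 + 0.8 * adstock(tv, 0.5) + 1.5 * search + rng.normal(0, 5, weeks)

# Fit a linear model on the adstock-transformed channels.
X = np.column_stack([np.ones(weeks), adstock(tv, 0.5), search])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(coef)  # approximately recovers the simulated [200, 0.8, 1.5]
```

With only ~52 weekly observations per year, estimates like these get noisy fast as channels are added, which is exactly where the Bayesian priors Bas mentioned help.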
Jodie Burchell, PhD's talk "Lies, Damned Lies, and Large Language Models" explored why hallucinations in LLMs happen and ways to mitigate them. LLMs, powered by vast datasets and transformer architecture, can produce impressive but sometimes misleading content. Hallucinations fall into two categories: faithfulness errors, where the model strays from the source, and factual errors, where it confidently shares inaccurate "knowledge." Solutions include prompt-tuning, self-checking, and grounding responses with live data. While hallucinations may persist, understanding and reducing their risks ensures LLMs are applied responsibly.
Bonus From Jodie
For those in the small group chat after the talk, you already have these! Sharing with the larger community:
• Another talk by Jodie, "Mirror, Mirror: LLMs and the Illusion of Humanity": https://lnkd.in/eZzVFC5y
• On the Measure of Intelligence: https://lnkd.in/fcWYgwr
• Levels of AGI for Operationalizing Progress on the Path to AGI: https://lnkd.in/dBNwvCjZ