





















































The future of secure AI-driven development is here, and DevSecCon25 is leading the conversation!
Join us on October 22, 2025 for this one-day event to hear from leading experts in AI and security from Qodo, Ragie.ai, Casco, Arcade.dev, and more!
Don't miss this opportunity to gain the knowledge and strategies needed to embrace the AI revolution securely.
Sponsored
Welcome to BIPro Expert Insights #115
We’re excited to bring you another packed edition full of deep dives, practical tutorials, and cutting-edge updates in Data Management & BI. This week, we’re thrilled to welcome Nishant Arora, Solutions Architect at AWS, to our newsletter portfolio; he will be sharing deep-dive insights on how AI is reshaping industries. Nishant takes us into the world of trustworthy AI in automotive and manufacturing, where safety, explainability, and regulatory readiness are non-negotiable for the ML systems driving the future of mobility.
Alongside this deep dive, here are the key highlights making waves in BI and data this week:
Power BI September Updates ➖ Copilot default-on, smarter DAX, new visuals, and Teams integration.
Fabric September Updates ➖ Governance tools, Python notebooks GA, Fabric CLI open-sourced, and more.
Fabric Data Factory ➖ Stronger security and compliance for enterprise-scale data integration.
Statsig in Fabric ➖ Native experimentation analytics for A/B testing and feature insights.
BigQuery Upgrades ➖ Conversational insights and TimesFM-powered forecasting without extra ML setup.
Gemini CLI for PostgreSQL ➖ Use plain English for queries, schema management, and extensions.
Case Study: SSENSE ➖ Migrated 600 dashboards to QuickSight, cutting costs by two-thirds.
Case Study: PayNet ➖ Modernized BI with near real-time analytics and natural language queries.
From safer AI in cars to smarter analytics in the enterprise, this week’s stories all point toward one theme: building trust while pushing innovation forward.
Let’s dive in.
Cheers,
Merlyn Shelley
Growth Lead, Packt
Artificial intelligence (AI) and machine learning (ML) are redefining the automotive industry. Cars are no longer just mechanical systems; they are intelligent, adaptive, and connected machines. Advanced driver-assistance systems (ADAS), predictive maintenance tools, and self-driving algorithms promise safer and more efficient transportation. Yet, the integration of ML also raises pressing concerns: can we guarantee these systems behave safely, explain their choices, and comply with strict automotive standards?
Unlike recommendation systems or digital assistants, automotive ML operates in life-critical environments. A single wrong decision, such as misidentifying a pedestrian, miscalculating a braking distance, or failing to detect a sensor fault, could have irreversible consequences. This is why trustworthiness is not just a desirable property, but a precondition for adoption at scale.
Safety as the Core of Trust
In safety-critical applications, evaluating ML performance goes beyond accuracy. What matters is whether the system preserves safe operation under all circumstances. A useful framing is:
P(Safe | Model Decision)
This probability expresses the likelihood that, given a model’s action, the outcome is safe. Accuracy alone does not guarantee that the rare but dangerous cases are adequately addressed.
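In practice, this conditional probability can be estimated empirically from logged decision/outcome pairs. The sketch below illustrates the idea on made-up data; the decision and outcome labels are assumptions for illustration, not from any real vehicle log:

```python
# Hypothetical illustration: estimating P(Safe | Model Decision) from
# logged (decision, outcome) pairs. Labels are invented for the example.
def p_safe_given_decision(logs, decision):
    """Fraction of logged outcomes that were safe, given a specific decision."""
    relevant = [outcome for d, outcome in logs if d == decision]
    if not relevant:
        return None  # no evidence for this decision yet
    return sum(1 for o in relevant if o == "safe") / len(relevant)

logs = [
    ("brake", "safe"), ("brake", "safe"), ("brake", "unsafe"),
    ("accelerate", "safe"),
]
print(p_safe_given_decision(logs, "brake"))  # 2/3 ≈ 0.667
```

Note how this estimate conditions on the decision rather than on overall accuracy: a model can be accurate on average while a specific decision class remains unacceptably risky.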
Equally important is the ability to measure uncertainty. For example, an object recognition system in an autonomous car must know when it is unsure whether a shadow is a pedestrian or just road texture. This can be modeled as predictive variance:
Var(y | x, θ)
where y is the outcome for input x under model parameters θ. Systems that quantify uncertainty allow safer fallback strategies, such as driver takeover or conservative control.
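One common way to approximate this predictive variance is with a small ensemble: disagreement among ensemble members signals uncertainty. Here is a minimal sketch with toy linear "models" standing in for real networks; the models and inputs are assumptions for illustration:

```python
# Minimal sketch: estimating Var(y | x) via ensemble disagreement.
# The three "models" are toy stand-ins for independently trained networks.
from statistics import pvariance, mean

def ensemble_predict(models, x):
    """Return (mean, variance) of predictions across ensemble members."""
    preds = [m(x) for m in models]
    return mean(preds), pvariance(preds)

# Toy models that disagree more as |x| grows (a "harder" input).
models = [lambda x, w=w: w * x for w in (0.9, 1.0, 1.1)]

mu_near, var_near = ensemble_predict(models, 1.0)   # low disagreement
mu_far, var_far = ensemble_predict(models, 10.0)    # high disagreement
print(var_near < var_far)  # True: more uncertainty on the harder input
```

A deployed system would compare the variance against a threshold and trigger the fallback (driver takeover, conservative control) when it is exceeded.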
Safety can also be built directly into model training. A combined objective function might look like:
L = L_accuracy + λ · L_safety
where L_accuracy reflects predictive performance and L_safety penalizes unsafe decisions, weighted by the factor λ. In this way, the model learns not only to be correct, but also to respect predefined safety boundaries.
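The combined objective above can be sketched numerically. In this illustrative version, L_accuracy is mean squared error and L_safety is a hinge-style penalty on predictions that cross an assumed safety margin; both loss definitions and the margin are assumptions, not a production formulation:

```python
# Sketch of the combined objective L = L_accuracy + λ · L_safety.
# The concrete loss terms and margin are illustrative assumptions.
def combined_loss(y_true, y_pred, unsafe_margin, lam=10.0):
    """Mean squared error plus a weighted penalty for crossing a safety margin."""
    l_accuracy = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    # Hinge penalty: zero inside the margin, linear once a prediction exceeds it.
    l_safety = sum(max(0.0, abs(p) - unsafe_margin) for p in y_pred) / len(y_pred)
    return l_accuracy + lam * l_safety

y_true = [0.5, 1.0]
safe_preds = [0.6, 0.9]    # inside the margin: only the accuracy term applies
unsafe_preds = [0.6, 2.5]  # one prediction violates the margin
print(combined_loss(y_true, safe_preds, unsafe_margin=2.0))
print(combined_loss(y_true, unsafe_preds, unsafe_margin=2.0))
```

Because λ scales the safety term, the trainer can tune how strongly the model is pushed away from unsafe regions relative to raw accuracy.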
Finally, confidence calibration is vital. Regulators often require that predicted probabilities align with actual outcomes, ensuring that an ML model’s confidence is trustworthy:
E[|ŷ − y|] ≤ ε
where ε represents the maximum allowable deviation. Poor calibration can create dangerous overconfidence even when classification accuracy is high.
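A simple empirical check in the spirit of this bound compares predicted probabilities to observed binary outcomes. The sketch below uses invented numbers and a crude mean absolute gap rather than a full calibration metric such as binned expected calibration error:

```python
# Minimal calibration check in the spirit of E[|ŷ − y|] ≤ ε.
# Probabilities and outcomes are made-up illustration data.
def mean_calibration_gap(probs, outcomes):
    """Average absolute gap between predicted probability and 0/1 outcome."""
    return sum(abs(p - y) for p, y in zip(probs, outcomes)) / len(probs)

def is_calibrated(probs, outcomes, epsilon=0.2):
    """True when the mean gap stays within the allowable deviation ε."""
    return mean_calibration_gap(probs, outcomes) <= epsilon

probs = [0.9, 0.8, 0.1, 0.95]
outcomes = [1, 1, 0, 1]
print(mean_calibration_gap(probs, outcomes))  # 0.1125
print(is_calibrated(probs, outcomes))         # True
```

In a certification setting, this kind of check would run on held-out and post-deployment data, with ε fixed in advance as part of the safety case.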
Explainability: Building Human Trust
Even a safe system will not be widely adopted if engineers, regulators, and customers cannot understand how it works. This is where explainable ML (XAI) becomes indispensable.
Some prominent methods include:
>> Feature attribution tools (e.g., SHAP, LIME) that show which sensor inputs or environmental factors most influenced a model’s decision.
>> Surrogate models, such as simple decision trees approximating a deep neural network, which make the decision boundary more interpretable.
>> Rule-based explanations, translating complex outputs into understandable logic: “if the road is slippery and braking distance exceeds a threshold, reduce speed.”
Such techniques allow developers to debug failures, give regulators evidence for certification, and help build public confidence in ML-driven cars.
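The rule-based style of explanation above can be sketched directly in code. The thresholds, signal names, and actions below are illustrative assumptions that echo the example rule in the text, not any real vehicle logic:

```python
# Toy rule-based explanation layer, following the example rule in the text:
# "if the road is slippery and braking distance exceeds a threshold, reduce speed."
# All signal names, thresholds, and actions are illustrative assumptions.
def explain_decision(road_slippery, braking_distance_m, threshold_m=40.0):
    """Return (action, human-readable reason for the action)."""
    if road_slippery and braking_distance_m > threshold_m:
        reason = (f"road is slippery and braking distance {braking_distance_m} m "
                  f"exceeds threshold {threshold_m} m")
        return "reduce_speed", reason
    return "maintain_speed", "no safety rule triggered"

action, reason = explain_decision(road_slippery=True, braking_distance_m=55.0)
print(action)  # reduce_speed
print(reason)
```

The value of this pattern is that every action ships with a reason a human can audit, which is exactly the evidence regulators and engineers need when a decision is questioned.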
Regulation and Safety Standards
Traditional automotive safety is governed by standards like ISO 26262, which defines processes and Automotive Safety Integrity Levels (ASILs). These were designed for deterministic, rule-based software. ML, by contrast, is probabilistic and data-driven, creating new challenges for compliance.
To bridge this gap, companies are adopting verification and validation (V&V) frameworks tailored for ML. These include large-scale simulation testing, corner-case scenario generation, and monitoring model drift once systems are deployed. The aim is not just to test for accuracy, but to produce audit trails and evidence of robustness that regulators can certify.
Looking ahead, standards will likely evolve to explicitly account for ML, requiring documentation of uncertainty estimates, explainability reports, and continuous monitoring logs.
Emerging Pathways to Safer ML
Several technological approaches show promise in making automotive ML more trustworthy:
Cloud-Native MLOps
Cloud platforms now allow continuous retraining and redeployment of ML models as conditions shift (e.g., new road layouts or changing weather patterns). With automated testing pipelines, every new version can be checked against safety and compliance metrics before deployment.
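An automated pre-deployment check of this kind can be sketched as a release gate: a candidate model is promoted only if its evaluation metrics clear fixed thresholds. The metric names and threshold values below are assumptions for illustration:

```python
# Sketch of an automated release gate in an MLOps pipeline: a new model
# version is promoted only if it clears safety and compliance thresholds.
# Metric names and values are illustrative assumptions.
SAFETY_GATES = {
    "accuracy": 0.95,             # minimum acceptable
    "calibration_gap": 0.05,      # maximum acceptable
    "unsafe_action_rate": 0.001,  # maximum acceptable
}

def passes_gates(metrics):
    """Check a candidate model's evaluation metrics against the release gates."""
    return (metrics["accuracy"] >= SAFETY_GATES["accuracy"]
            and metrics["calibration_gap"] <= SAFETY_GATES["calibration_gap"]
            and metrics["unsafe_action_rate"] <= SAFETY_GATES["unsafe_action_rate"])

candidate = {"accuracy": 0.97, "calibration_gap": 0.03, "unsafe_action_rate": 0.0005}
print(passes_gates(candidate))  # True: this version may be deployed
```

In a real pipeline, this check would sit between the evaluation stage and the deployment stage, and every gate decision would be logged to the audit trail regulators expect.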
Digital Twins and Safety-Constrained Reinforcement Learning
Digital replicas of cars and environments enable billions of simulated test miles without real-world risk. Reinforcement learning agents can be trained with explicit safety constraints, ensuring that unsafe behaviors are never reinforced.
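One simple way to enforce such a safety constraint is a "shield" that clips the agent's proposed action before it is executed, so unsafe behavior never occurs and is never reinforced. The toy dynamics, speed limit, and timestep below are assumptions for illustration:

```python
# Minimal "safety shield" sketch for constrained reinforcement learning:
# the agent's proposed action is clipped against a hard constraint before
# execution. The dynamics, limit, and timestep are toy assumptions.
def shield(proposed_accel, speed_mps, speed_limit_mps=30.0, dt=0.1):
    """Clip acceleration so the next-step speed cannot exceed the limit."""
    max_accel = (speed_limit_mps - speed_mps) / dt
    return min(proposed_accel, max_accel)

# The agent proposes hard acceleration near the speed limit; the shield
# caps it so the next-step speed lands exactly on the 30 m/s limit.
safe_accel = shield(proposed_accel=5.0, speed_mps=29.8)
print(safe_accel)  # ≈ 2.0
```

During training, the shielded action (not the proposed one) is applied in the simulator or digital twin, which is what guarantees that unsafe behaviors are never part of the learned policy's experience.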
Self-Monitoring Agentic AI
Future systems may integrate agentic AI that audits its own behavior in real time. Such systems could flag potential regulatory violations, halt unsafe actions, or escalate control to human operators. This represents a step toward vehicles that self-enforce compliance rather than relying solely on external oversight.
Conclusion: Toward a Trustworthy Future
AI in automotive promises safer roads, lower maintenance costs, and smarter mobility. But none of this progress matters unless these systems are provably safe, transparent, and regulation-ready.
Automakers must embed safety objectives directly into training and evaluation. Regulators must expand standards like ISO 26262 to incorporate probabilistic models. Cloud providers and technology partners must deliver the infrastructure for continuous monitoring and compliance assurance.
The next era of mobility will not be defined merely by how advanced ML models become, but by how much trust society places in them. Only when AI systems are demonstrably safe, explainable, and aligned with regulatory frameworks will we see widespread adoption of truly autonomous and intelligent vehicles.
References
➖ ISO 26262:2018. Road Vehicles – Functional Safety. International Organization for Standardization.
➖ Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete Problems in AI Safety. arXiv preprint arXiv:1606.06565.
➖ Doshi-Velez, F., & Kim, B. (2017). Towards a Rigorous Science of Interpretable Machine Learning. arXiv preprint arXiv:1702.08608.
➖ Kendall, A., & Gal, Y. (2017). What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? Advances in Neural Information Processing Systems (NeurIPS).
➖ Shapley, L. S. (1953). A Value for n-Person Games. Contributions to the Theory of Games, 2(28), 307–317. (Basis for SHAP explainability methods.)
➖ National Highway Traffic Safety Administration (NHTSA). (2020). Automated Vehicles 4.0: Preparing for the Future of Transportation. U.S. Department of Transportation.
⬛ Power BI September 2025 Feature Summary: The Power BI September 2025 Feature Summary coincides with FabCon Vienna and delivers major updates. Highlights include Copilot default-on with AI enhancements, improved DAX time intelligence, live Direct Lake semantic model editing, TMDL view GA, advanced visualization options, and mobile NFC support. Users can also benefit from Fabric certifications, championships, and enhanced report sharing in Teams.
⬛ Power BI in Teams – Content Shared in Teams Chats Now Opens a Dedicated Separate Window Within Teams: Power BI now makes collaboration in Teams smoother. Reports shared in chats or channels open in a separate window, keeping the original conversation in a collapsible side panel. This lets you explore data while continuing the chat, switch between Teams apps without disruption, and avoid multiple pop-ups, streamlining multitasking and keeping your workflow uninterrupted.
⬛ Fabric September 2025 Feature Summary: The Fabric September 2025 Feature Summary highlights new certifications, FabCon Vienna’s Power BI DataViz World Championships, and major platform upgrades. Key releases include the Govern Tab and Domains Public APIs (GA), expanded Microsoft Purview policies, enhanced Dataflow Gen2 Copilot features, Python notebooks GA, the open-sourced Fabric CLI, and new mirroring/connectors. These updates strengthen governance, extensibility, and developer productivity across Fabric.
⬛ Mission-Critical Data Integration: What’s New in Fabric Data Factory? This update introduces new mission-critical security and compliance features in Microsoft Fabric Data Factory. It highlights how Fabric now supports enterprise-grade data integration by strengthening authentication, isolation, secret management, gateway controls, and automation. The goal is to help organizations handle sensitive, large-scale workloads across hybrid and cloud environments with greater security, resilience, and governance.
⬛ Statsig Experimentation Analytics (Preview): Statsig’s Experimentation Analytics is now available in Microsoft Fabric. It introduces a new workload in the Fabric Workload Hub that lets product teams run experiments, define custom metrics, analyze user behavior, and measure feature impact, all directly on data in OneLake. The integration eliminates data movement, streamlines A/B testing, and enables faster, data-driven product innovation within the Fabric ecosystem.
⬛ AI-based forecasting and analytics in BigQuery via MCP and ADK: Google has expanded AI agent capabilities in BigQuery with two new tools: ask_data_insights for conversational analytics and BigQuery Forecast for time-series predictions. Agents can now answer complex questions in plain English, provide transparent reasoning, and generate forecasts using the TimesFM model, all without moving data or setting up ML infrastructure, streamlining enterprise-scale data analysis and prediction workflows.
⬛ Gemini CLI for PostgreSQL in action: The Gemini CLI extension for PostgreSQL simplifies database tasks by letting developers use plain-English commands instead of switching between tools. It can identify and install extensions like pg_trgm for fuzzy search, recommend performance optimizations, and generate queries automatically. Beyond search, it supports schema exploration, lifecycle management, and code generation, turning the CLI into a true database assistant.
⬛ How SSENSE modernized their analytics platform with Amazon QuickSight? SSENSE, a global luxury e-commerce platform, migrated nearly 600 dashboards from its legacy BI system to Amazon QuickSight in just 8 months. The move reduced analytics costs by two-thirds, cut dashboard maintenance by 80%, and improved Athena data availability by 95%. With seamless AWS integration, AI-driven features, and broad user adoption, SSENSE achieved scalable, efficient self-service analytics.
⬛ How PayNet enhanced payment analytics with Amazon QuickSight? Payments Network Malaysia (PayNet), the country’s national payments backbone, modernized its BI by migrating to Amazon QuickSight. Leveraging Athena, S3, and AWS Glue, PayNet enabled near real-time transaction analytics, cross-border payment insights, and secure SSO via IAM Identity Center. With Amazon Q, users now query data in natural language for defect monitoring, benchmarking, and operational visibility, boosting efficiency, security, and collaboration.
See you next time!