💬 Monday is your chance to talk directly to the LangChain team. We're hosting a Community Jam Session on Monday to get your feedback and share more on LangChain 1.0 and 1.1. We'll be joined by Sydney Runkle, Chester Curme, and Hunter Lovell. We want to hear your real-world wins and friction points!

On the agenda:
1. Real-world 1.0 feedback: Tell us how it's performing in your production builds.
2. 1.1 feature review: What are you building with the new updates?
3. The future: Updates on the roadmap and the recently published langchain-mcp-adapters.

Join, meet the OSS team and other community members, and help build the future of the tools you need!

👉 RSVP to get event reminders: https://luma.com/085nyxmj
Excited to host this session on Monday! I love meeting community members, users, and newcomers and hearing their wins and pain points.
Love LangChain overall - just two areas where improvements would make a real difference:

1. Multi-model graphs need better support. Building graphs that combine different models (especially mixing thinking and non-thinking models) is harder than it should be. The main issue: each model has its own quirks for how it returns thinking/reasoning tokens, which makes output parsing inconsistent. Would love a unified abstraction that normalizes this across providers.

2. DeepSeek Reasoner integration is broken. There's an open issue with DeepSeek's reasoning model that needs attention.
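To make point 1 concrete, here's a rough sketch of the kind of normalization layer I have in mind. The message shapes and field names below (`reasoning_content` in an extra-kwargs dict, typed `thinking` content blocks) are illustrative examples of how providers differ, not exact LangChain APIs:

```python
from typing import Any, Optional

def extract_reasoning(message: dict[str, Any]) -> Optional[str]:
    """Pull reasoning/thinking text out of a model response, wherever it hides.

    Handles two shapes as examples (field names are illustrative):
    - reasoning text stashed in an extra-kwargs dict
    - content as a list of typed blocks, one of which is a thinking block
    """
    # Shape 1: reasoning tucked into an auxiliary kwargs dict.
    extra = message.get("additional_kwargs", {})
    if isinstance(extra, dict) and extra.get("reasoning_content"):
        return extra["reasoning_content"]

    # Shape 2: content is a list of typed blocks; find the thinking block.
    content = message.get("content")
    if isinstance(content, list):
        for block in content:
            if isinstance(block, dict) and block.get("type") == "thinking":
                return block.get("thinking")

    return None  # no reasoning tokens found

# Two mock responses mimicking different provider conventions:
style_a = {
    "content": "Final answer.",
    "additional_kwargs": {"reasoning_content": "Step 1: think hard."},
}
style_b = {
    "content": [
        {"type": "thinking", "thinking": "Step 1: think hard."},
        {"type": "text", "text": "Final answer."},
    ]
}

print(extract_reasoning(style_a))  # Step 1: think hard.
print(extract_reasoning(style_b))  # Step 1: think hard.
```

Today every graph that mixes providers ends up re-implementing something like this ad hoc; having it live behind a single message abstraction would remove a whole class of parsing bugs.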