We shared how the new Jellyfish API opens up new possibilities for engineering teams. But what does it actually look like in action? We paired it with Amazon Web Services (AWS) Q Business, a generative AI assistant, to test its potential. The result? Faster access to Jellyfish data, natural-language insights, and streamlined workflows. Ready to explore what’s possible? Check out the blog for all the details. ⬇️
Jellyfish’s Post
More Relevant Posts
Google just dropped 5 game-changing AI agents. And they're not just tools, they're your new dev team:

1/ BigQuery Data Agent:
↳ Build pipelines in plain English.
↳ Automate Cloud Storage ingestion.
↳ Ship clean data, zero extra code.
🔗 https://lnkd.in/djh9mJmT

2/ Notebook Agent:
↳ Automate data exploration.
↳ Handle feature engineering.
↳ Generate ML predictions instantly.
🔗 https://notebooklm.google/

3/ Looker Code Assistant:
↳ Chat with your data.
↳ Get visual insights + Python code.
↳ Build interactive dashboards faster.
🔗 https://lnkd.in/dssDiKJk

4/ Database Migration Agent:
↳ Simplify MySQL → Spanner moves.
↳ Cut modernization time in half.
↳ Speed up cloud migrations.
🔗 https://lnkd.in/dycS2s6a

5/ GitHub Agent:
↳ Streamline repo management.
↳ Let AI handle issue triage.
↳ Accelerate PR reviews.
🔗 https://lnkd.in/dKwyEU3Z

Which AI agent excites you the most? Share below.
♻️ Share with someone who needs to see this.
These new AI agents are more than a "game changer": they target decades-old data engineering bottlenecks. By cutting the cross-team dependency graph, they could go a long way toward eliminating the persistent wait times and long-tail code backlogs that slow delivery.
Migrating to Google Cloud will be easier than ever. Google seems to have nailed the most important AI use case so far: using AI to power its own go-to-market strategy. Microsoft and Amazon Web Services (AWS) had better hurry! Why? As soon as data and software engineers learn how to use these AI agents built for Google Cloud, they will be able to run a Google Cloud migration project far faster, and therefore cheaper, than a migration to Azure or AWS. A great competitive advantage! Cheaper migration -> more customers -> increased ARR! The next move would be training their partners accordingly. Was anyone invited? #agenticAI #cloud #IT
This post provides a comprehensive hands-on guide to fine-tune Amazon Nova Lite for document processing tasks, with a focus on tax form data extraction. Using our open-source GitHub repository code sample, we demonstrate the complete workflow from...
Ever wonder what it takes to turn mountains of raw financial data into clear, actionable insights for options traders? At ProfitScout, we've built a fully serverless, AI-powered data pipeline to do just that. Our engine works around the clock, orchestrating a three-stage process to power the signals you see in the app: 1️⃣ Ingestion: We start by pulling in a massive amount of data, including SEC filings (10-Ks, 10-Qs), earnings call transcripts, daily options chain data, financial statements, and real-time market news. 2️⃣ AI Enrichment: This is where the magic happens. We use state-of-the-art AI and large language models to analyze everything. Our models read through filings to assess risk, evaluate the sentiment and tone of earnings calls, and identify key trends in financial statements and market news. 3️⃣ Serving: Finally, the enriched data is aggregated into a single, weighted score for every stock in the Russell 1000. This powers our five-tier analytical view (from "Strongly Bullish" to "Strongly Bearish") and helps us identify the highest-scoring options setups. The result? A powerful, data-driven research platform that helps you go from analysis to a more informed decision. For those interested in the technical details, our entire pipeline is open-source. You can explore the code on GitHub: https://lnkd.in/efX2rJp6 Check out the live results of our pipeline at profitscout.app! #FinTech #AI #OptionsTrading #DataScience #Investing #BigData #Python #Serverless #OpenSource
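To make the serving stage concrete, here is a minimal sketch of the aggregation idea: combine per-signal scores into one weighted score, then map it onto a five-tier label. The component names, weights, and thresholds below are made up for illustration; the real pipeline is in the linked GitHub repo.

```python
# Hypothetical sketch of a "serving" aggregation step: weighted average of
# per-signal scores (each in [-1, 1]), then a five-tier label.
# Names, weights, and thresholds are illustrative, not ProfitScout's actual values.

WEIGHTS = {"filings_risk": 0.3, "call_sentiment": 0.3, "fundamentals": 0.2, "news": 0.2}

TIERS = [  # (minimum score, label), checked top-down
    (0.6, "Strongly Bullish"),
    (0.2, "Bullish"),
    (-0.2, "Neutral"),
    (-0.6, "Bearish"),
]

def aggregate(scores: dict[str, float]) -> float:
    """Weighted average of the component scores; missing signals count as 0."""
    return sum(WEIGHTS[k] * scores.get(k, 0.0) for k in WEIGHTS)

def tier(score: float) -> str:
    """Map an aggregate score to one of five tiers."""
    for threshold, label in TIERS:
        if score >= threshold:
            return label
    return "Strongly Bearish"

score = aggregate({"filings_risk": 0.8, "call_sentiment": 0.7,
                   "fundamentals": 0.4, "news": 0.2})  # 0.57
print(tier(score))  # prints "Bullish"
```

The threshold list is ordered from most to least bullish so the first match wins; anything below the last threshold falls through to "Strongly Bearish".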
🧩 Understanding Content Chunking in Amazon Bedrock Knowledge Bases

When you upload data to a Knowledge Base in Amazon Bedrock, the system splits your documents into chunks: smaller pieces of text optimized for embedding and retrieval. This process directly affects the accuracy, speed, and context of how your Knowledge Base answers questions.

Let’s break down the three main chunking methods 👇

⚙️ 1️⃣ Standard Chunking
The most common approach.
- Fixed-size: You define how many tokens each chunk contains and how much overlap exists between them.
- Default: Automatically splits text into ~300-token chunks, keeping sentences intact.
- No Chunking: Each document remains as one chunk (useful if you’ve pre-split your files).
👉 Best for predictable structure and consistent performance.

🧱 2️⃣ Hierarchical Chunking
Organizes content into parent and child chunks. You set the token size for both levels and their overlap. Retrieval starts with precise child chunks and then replaces them with broader parent chunks for full context.
💡 This approach balances precision and context, ideal for long, structured documents.

🧠 3️⃣ Semantic Chunking
Powered by a foundation model, this method splits text based on meaning, not just size. Configurable parameters include:
- Max tokens per chunk
- Buffer size: how many nearby sentences are included for context
- Breakpoint percentile threshold: controls how “different” sentences must be to start a new chunk
💰 More accurate retrieval, but also higher cost, since a foundation model is used.

🔹 In short:
- Standard → Simple and efficient
- Hierarchical → Balanced and structured
- Semantic → Intelligent and context-aware

Choosing the right chunking strategy depends on your use case: whether you prioritize speed, context, or semantic precision.

#AmazonBedrock #KnowledgeBases #VectorSearch #AIArchitecture #AWS #GenerativeAI
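The fixed-size strategy is easy to sketch. The toy below approximates tokens with whitespace-separated words, which is an assumption for illustration: Bedrock counts real tokens and also tries to keep sentences intact.

```python
# Minimal sketch of fixed-size chunking with overlap (the "Standard" strategy).
# Token counting is approximated by whitespace words; the real service
# tokenizes properly and respects sentence boundaries.

def chunk_fixed(text: str, max_tokens: int = 300, overlap: int = 50) -> list[str]:
    words = text.split()
    step = max_tokens - overlap          # how far the window advances each time
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_tokens]))
        if start + max_tokens >= len(words):
            break                        # last window already covers the tail
    return chunks

doc = " ".join(f"w{i}" for i in range(700))
chunks = chunk_fixed(doc, max_tokens=300, overlap=50)
print(len(chunks))  # prints 3: windows cover words 0-299, 250-549, 500-699
```

The overlap means each chunk repeats the tail of the previous one, so a sentence that straddles a boundary is still fully retrievable from at least one chunk.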
We split our devs into 2 teams to build our retail AI agent: Vector DB vs Graph DB. Here's how it went 👇 If you voted in the poll on my last post and guessed that the knowledge graph won, you were right. ... sort of. We were seeing far better results from writing our relations in a graph DB versus a vector DB. 2-3 months ago, we officially decided a knowledge graph was the better path for us. BUT. We aren’t using only Neo4j. We’re putting some elements into MongoDB, others in Neo4j, and others into a vector. Using only one tool—whatever it might be—would be detrimental to our speeds over time. So if you guessed hybrid, you were also right! 🏆 You can read the full story of our 2-team experiment at the link in the comments.
How does Amazon Q Developer CLI manage your context, and what happens behind the scenes of the /context command? Let’s decode the application:

The Q CLI application ships with an internal context management system built around several key components. Tracked files are distinguished between:
🤖 Agent paths: files defined in the agent configuration (persistent)
📁 Session paths: files added via /context add (temporary)

When you use the /context add command, Q CLI:
- Validates that paths exist (unless --force is used)
- Expands glob patterns (e.g., *.py, src/**/*.js)
- Adds the paths as session entries

The most important part: how files become part of your conversation. Your files are surrounded by special CONTEXT_ENTRY_START_HEADER and CONTEXT_ENTRY_END_HEADER markers, and the context files are integrated as specially formatted content within the conversation context that gets sent to the AI model:

--- CONTEXT ENTRY BEGIN ---
[src/main.py]
def main():
    print("Hello, World!")

[src/utils.py]
def helper_function():
    return "utility"
--- CONTEXT ENTRY END ---

Context files are sorted alphabetically and deduplicated by filename. Hooks can also contribute context content with the same header format.

The Q CLI application further implements token management for your context files:
- It uses 75% of the model’s context window for files
- It automatically drops the largest files when limits are exceeded
- It warns you when files are dropped

Some further implementation details worth knowing for managing your context:
- Agent-defined files are permanent, while session files are temporary. Agent files are persistent because they’re defined in the agent configuration file, not because of any special caching mechanism.
- Session context changes don’t persist between chat sessions.
- Files are read fresh for each request (no caching), respecting filesystem permissions.

The next time you use /context add, you’ll know there’s a system working behind the scenes to make your files seamlessly available to the underlying AI model, all while keeping your conversation flowing smoothly. What other aspects of Q CLI do you want me to decode in future parts of this series?
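The assembly-plus-budget behavior described above can be sketched in a few lines. Everything here is illustrative: the header strings mirror the post's example, and the 4-characters-per-token estimate and exact drop loop are assumptions, not Q CLI's actual internals.

```python
# Rough sketch of the described behavior: wrap files in context entry headers,
# sort and dedupe by filename, and drop the largest files when the file budget
# (~75% of the model's context window) is exceeded. The header strings and the
# 4-chars-per-token estimate are illustrative assumptions.

def build_context(files: dict[str, str], context_window_tokens: int) -> str:
    budget = int(context_window_tokens * 0.75)        # 75% reserved for files
    entries = sorted(files.items())                   # alphabetical; dict keys dedupe
    est = lambda s: len(s) // 4                       # crude token estimate
    while entries and sum(est(body) for _, body in entries) > budget:
        largest = max(entries, key=lambda kv: len(kv[1]))
        entries.remove(largest)                       # drop the largest file first
        print(f"warning: dropped {largest[0]} (over context budget)")
    parts = ["--- CONTEXT ENTRY BEGIN ---"]
    for path, body in entries:
        parts.append(f"[{path}]\n{body}")
    parts.append("--- CONTEXT ENTRY END ---")
    return "\n".join(parts)

ctx = build_context({"src/utils.py": "def helper(): ...",
                     "src/main.py": "def main(): ..."}, context_window_tokens=8000)
print(ctx.splitlines()[1])  # prints "[src/main.py]", sorted ahead of src/utils.py
```

Note that re-running `build_context` on every request mirrors the "read fresh, no caching" behavior: nothing is memoized between calls.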
Over the past few weeks, I ran two hands-on experiments, and the results challenged a lot of common assumptions. 📊

1️⃣ I benchmarked 6 vector databases for RAG. Here’s what surprised me most:
FAISS, Qdrant, Milvus Lite, ChromaDB, Redis, and Pinecone, tested head-to-head with Gemma 300M embeddings.
💡 Takeaway: FAISS is still the fastest, but Qdrant and Milvus Lite are catching up fast.
🔗 Read here → https://lnkd.in/gurHNqq7

⚙️ 2️⃣ Is my RAG system over-engineered? I tested it.
Using Groq for a contextual RAG system, we found that simplicity often wins over complexity.
💡 Takeaway: Hybrid search (BM25 + embeddings) is a superpower. Rerankers help but can be slow. Simple semantic chunking often beats fancy LLM-based approaches.
🔗 Read here → https://lnkd.in/gKa5id9C

⚖️ In RAG, balance beats obsession. It’s not about the flashiest model or DB; it’s about the right tradeoffs between latency, relevance, and simplicity.
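One common way to implement the hybrid-search takeaway is reciprocal rank fusion (RRF), which merges a BM25 ranking and an embedding ranking without needing to normalize their scores. The doc IDs and the two input rankings below are made up; in practice they come from your BM25 index and your vector DB respectively.

```python
# Reciprocal rank fusion: each document scores sum(1 / (k + rank)) across the
# input rankings, so documents that rank well in BOTH lists rise to the top.
# k=60 is the conventional smoothing constant from the RRF literature.

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of doc IDs into one ranking."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["doc3", "doc1", "doc7"]        # keyword-based ranking (illustrative)
vector_hits = ["doc1", "doc9", "doc3"]      # embedding-based ranking (illustrative)
print(rrf([bm25_hits, vector_hits]))        # doc1 and doc3, present in both, lead
```

Because RRF only uses ranks, it sidesteps the problem that BM25 scores and cosine similarities live on incomparable scales, which is part of why it is such a popular hybrid-search baseline.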