Category: LLMs
-
How GPU Architecture Revolutionized LLMs
How GPU Architecture Helped LLMs The development and advancement of Large Language Models (LLMs) have been significantly propelled by the unique architecture of Graphics Processing Units (GPUs). Their parallel processing capabilities, high memory bandwidth, and specialized compute units have made training and deploying these massive models feasible and efficient. 1. Massively Parallel Processing LLMs involve… Read more
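The parallelism the article describes is easiest to see in code. Below is a minimal sketch, assuming PyTorch and illustrative tensor sizes (not the article's), of how the matrix multiplications at the heart of an LLM layer are dispatched to a GPU's many cores:

```python
# Minimal sketch: the matrix multiplications inside an LLM layer map naturally
# onto a GPU's thousands of cores. Dimensions here are illustrative only.
import torch

batch, seq_len, d_model = 8, 1024, 4096          # hypothetical transformer sizes
x = torch.randn(batch, seq_len, d_model)
w = torch.randn(d_model, d_model)                # one projection matrix

# CPU: the same math, executed with far less parallelism
y_cpu = x @ w

# GPU: identical operation, but thousands of multiply-accumulate units run at once
if torch.cuda.is_available():
    x_gpu, w_gpu = x.cuda(), w.cuda()
    y_gpu = x_gpu @ w_gpu                        # dispatched to massively parallel cores
    torch.cuda.synchronize()                     # wait for the asynchronous kernel to finish
```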
-
Salesforce Agentic AI: A Comprehensive Overview
Salesforce Agentic AI: A Comprehensive Overview Salesforce Agentic AI represents a significant evolution in how artificial intelligence is integrated into the Salesforce platform. Moving beyond simple automation and predictive analytics, Agentic AI aims to create intelligent, autonomous agents capable of understanding complex goals, planning multi-step actions, and executing tasks on behalf of users. This detailed… Read more
-
AI Agent with Short-Term Memory on AWS
AI Agent with Short-Term Memory on AWS In the realm of Artificial Intelligence, creating agents that can effectively interact with their environment and solve complex tasks often requires equipping them with a form of short-term memory, also known as “scratchpad” or working memory. This allows the agent to temporarily store and process information relevant to… Read more
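As a rough illustration of the idea, here is a hedged sketch of per-session working memory persisted in DynamoDB via boto3. The table name "AgentScratchpad", its key schema, and the helper names are assumptions, not taken from the article:

```python
# Minimal sketch of per-session short-term memory backed by DynamoDB.
# The table name "AgentScratchpad" and its key schema are assumptions.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("AgentScratchpad")

def remember(session_id: str, note: str) -> None:
    """Append a note to the session's working memory."""
    table.update_item(
        Key={"session_id": session_id},
        UpdateExpression="SET notes = list_append(if_not_exists(notes, :empty), :note)",
        ExpressionAttributeValues={":note": [note], ":empty": []},
    )

def recall(session_id: str) -> list[str]:
    """Fetch everything the agent has jotted down for this session."""
    item = table.get_item(Key={"session_id": session_id}).get("Item", {})
    return item.get("notes", [])
```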
-
AI Agent with Scratchpad Memory on AWS
AI Agents with Scratchpad Memory on AWS AI agents equipped with “scratchpad” memory, or short-term working memory, significantly enhance their capabilities by allowing them to temporarily store and process information relevant to their current tasks. This enables them to handle complex scenarios, maintain context across interactions, and reason more effectively. This article explores the use… Read more
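To make the "scratchpad" notion concrete, here is an illustrative in-process sketch (class and prompt shape are hypothetical, not the article's code): the agent jots down intermediate findings and folds them into its next prompt.

```python
# Illustrative in-process scratchpad: the agent appends intermediate notes
# and the accumulated notes are folded into the next prompt. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Scratchpad:
    notes: list[str] = field(default_factory=list)

    def write(self, note: str) -> None:
        self.notes.append(note)

    def as_context(self) -> str:
        return "\n".join(f"- {n}" for n in self.notes)

pad = Scratchpad()
pad.write("User asked for the order status of #1042.")
pad.write("Order service returned: shipped on 2025-04-20.")

prompt = (
    "You are a support agent. Working notes so far:\n"
    f"{pad.as_context()}\n\n"
    "Answer the user's latest question using these notes."
)
```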
-
Comparing Top LLMs
Comparing Top LLMs (April 2025) The landscape of Large Language Models (LLMs) is constantly evolving. Here’s a comparison of some of the top contenders as of late April 2025, keeping in mind that rankings & capabilities can shift rapidly: Top 8 LLMs (Based on Current Trends & Capabilities): GPT-4o (OpenAI): Known for its strong general… Read more
-
Leveraging Generative AI for Agentic AI Implementations
Leveraging Generative AI for Agentic AI Implementations (2025) In 2025, leveraging Generative AI (GenAI) significantly enhances the capabilities and potential of Agentic AI implementations on autonomous platforms like n8n. GenAI’s ability to create novel content and understand nuanced language complements the autonomous decision-making of agentic systems, leading to more sophisticated and versatile AI agents. 1.… Read more
-
Generative AI vs. Agentic AI vs. AI
Generative AI vs. Agentic AI vs. AI (2025) In 2025, understanding the nuances between Generative AI, Agentic AI, and the broader field of AI is crucial. Here’s a breakdown of each: Artificial Intelligence (AI) At its core, Artificial Intelligence (AI) is the overarching field of computer science dedicated to creating machines and software capable of… Read more
-
Building Agentic AI Applications Using n8n
Building Agentic AI Using n8n n8n, a powerful open-source workflow automation platform, can be effectively leveraged to build various components and orchestrate the functionalities of agentic AI systems in 2025. While n8n itself isn’t a machine learning framework for training AI models, its ability to connect different services, handle data transformations, and manage complex workflows… Read more
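One common pattern when orchestrating agent tasks with n8n is to expose a workflow behind a Webhook trigger node and call it from outside code. A hedged sketch follows; the URL and payload fields are placeholders for your own workflow:

```python
# Calling an n8n workflow that starts with a Webhook trigger node.
# The URL and payload fields are placeholders for your own workflow.
import requests

N8N_WEBHOOK_URL = "https://your-n8n-host/webhook/agent-task"   # assumption: your webhook path

payload = {
    "goal": "Summarise yesterday's support tickets and email the summary",
    "requested_by": "ops-team",
}

resp = requests.post(N8N_WEBHOOK_URL, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())   # whatever the workflow's "Respond to Webhook" node returns
```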
-
Exploring the Synergy of Kafka and Databricks for Agentic AI
Combining Apache Kafka and Databricks offers a powerful and comprehensive platform for building, deploying, and managing sophisticated agentic AI systems. Kafka excels at real-time data ingestion and stream processing, while Databricks provides a unified environment for big data processing, machine learning, and AI model development. Kafka’s Role in Agentic AI: Real-time Data Foundation Kafka provides… Read more
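On the ingestion side, the pattern is a consumer that streams events to the agent logic as they arrive. Here is a hedged sketch using kafka-python; the broker address, topic name, and the handle_event callback are placeholders rather than anything from the article:

```python
# Sketch of the Kafka ingestion side: stream events to an agent handler.
# Broker address, topic name, and the handle_event callback are placeholders.
import json
from kafka import KafkaConsumer   # pip install kafka-python

consumer = KafkaConsumer(
    "agent-events",                               # assumed topic name
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="latest",
)

def handle_event(event: dict) -> None:
    """Stand-in for the agentic logic that would run in Databricks or elsewhere."""
    print("agent received:", event)

for message in consumer:
    handle_event(message.value)
```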
-
Model Context Protocol (MCP) for Agentic AI
The Model Context Protocol (MCP), primarily developed by Anthropic, is an open protocol designed to standardize how applications provide context (data and tools) to large language models (LLMs), which often serve as the foundation for agentic AI systems. It aims to create a universal and efficient way for AI models to interact with various external… Read more
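MCP exchanges are JSON-RPC style messages. The snippet below is only an illustration of the general shape of a tool invocation request, not the full or exact schema; the tool name and arguments are hypothetical:

```python
# Illustrative shape (not the exact schema) of an MCP-style JSON-RPC exchange:
# a client asks a server to run one of its declared tools on the model's behalf.
import json

tool_call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_orders",                 # hypothetical tool exposed by an MCP server
        "arguments": {"customer_id": "C-1042"},
    },
}

print(json.dumps(tool_call_request, indent=2))
```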
-
Building Agentic AI Applications on Microsoft Azure
Microsoft Azure offers a rich set of services and tools for building agentic AI applications – intelligent systems capable of autonomous action, planning, memory, and interaction with their environment. This detailed guide outlines key Azure services, their functionalities, and relevant links to help you get started, formatted for your WordPress site. Core Foundation Models Agent… Read more
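As a minimal sketch of wiring an agent's reasoning core to Azure, here is a chat call against an Azure OpenAI deployment using the openai SDK's AzureOpenAI client. The endpoint, API version, and the deployment name "gpt-4o" are placeholders for your own resource:

```python
# Minimal sketch: calling an Azure OpenAI chat deployment as the agent's reasoning core.
# Endpoint, api_version, and the deployment name are placeholders for your resource.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-15-preview",            # placeholder API version
)

response = client.chat.completions.create(
    model="gpt-4o",    # your deployment name
    messages=[
        {"role": "system", "content": "You are a planning agent. Propose the next step."},
        {"role": "user", "content": "Goal: reconcile today's failed payments."},
    ],
)
print(response.choices[0].message.content)
```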
-
Agentic AI Tools
Agentic AI refers to a type of artificial intelligence system that can operate autonomously to achieve specific goals. Unlike traditional AI, which typically follows pre-programmed instructions, agentic AI can perceive its environment, reason about complex situations, make decisions, and take actions with limited or no direct human intervention. These systems often leverage large language models… Read more
-
Intelligent Chat Agent UI with Retrieval-Augmented Generation (RAG) and a Large Language Model (LLM) using Amazon OpenSearch
In today’s digital age, providing efficient and accurate customer support is paramount. Intelligent chat agents, powered by the latest advancements in Natural Language Processing (NLP), offer a promising avenue for addressing user queries effectively. This comprehensive article will guide you through the process of building a sophisticated Chat Agent UI application that leverages the power… Read more
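The retrieval step behind such a chat agent can be sketched as a k-NN query against an OpenSearch index using opensearch-py. The index name, vector field name, and the embed() helper below are assumptions for illustration:

```python
# Sketch of the retrieval step behind the chat agent: k-NN search in OpenSearch.
# Index name, vector field name, and the embed() helper are assumptions.
from opensearchpy import OpenSearch   # pip install opensearch-py

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}], use_ssl=False)

def embed(text: str) -> list[float]:
    """Placeholder: call your embedding model here (e.g. a sentence-transformer)."""
    raise NotImplementedError

def retrieve(question: str, k: int = 3) -> list[str]:
    """Return the k passages whose embeddings are closest to the question."""
    query = {
        "size": k,
        "query": {"knn": {"embedding": {"vector": embed(question), "k": k}}},
    }
    hits = client.search(index="support-docs", body=query)["hits"]["hits"]
    return [h["_source"]["text"] for h in hits]
```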
-
Building a Product Manual Chatbot with Amazon OpenSearch and Open-Source LLMs
This article guides you through building an intelligent chatbot that can answer questions based on your product manuals, leveraging the power of Amazon OpenSearch for semantic search and open-source Large Language Models (LLMs) for generating informative responses. This approach provides a cost-effective and customizable solution without relying on Amazon Bedrock. The Challenge: Navigating through lengthy… Read more
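For the generation half, a hedged sketch of stuffing retrieved manual passages into a prompt for an open-source model via Hugging Face transformers is shown below; the model choice is only an example:

```python
# Sketch of the generation step: answer from retrieved manual passages with an
# open-source model via Hugging Face transformers. Model choice is just an example.
from transformers import pipeline   # pip install transformers

generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

def answer(question: str, passages: list[str]) -> str:
    """Compose a grounded prompt from the retrieved passages and generate an answer."""
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using only the product manual excerpts below.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    out = generator(prompt, max_new_tokens=200, do_sample=False)
    return out[0]["generated_text"][len(prompt):].strip()
```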
-
Integrating Documentum with an Amazon Bedrock Chatbot API for Product Manuals
This article outlines the process of building a product manual chatbot API using Amazon Bedrock, with a specific focus on integrating content sourced from a Documentum repository. By leveraging the power of vector embeddings and Large Language Models (LLMs) within Bedrock, we can create an intelligent and accessible way for users to find information within… Read more
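The Bedrock call at the centre of such an API can be sketched with boto3's bedrock-runtime client. The model id and the Anthropic Messages-style request body below are assumptions about your configuration, not the article's exact setup:

```python
# Sketch: invoking a Bedrock-hosted model from the chatbot API via boto3.
# The model id and the Anthropic Messages-style body are assumptions about your setup.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask_manual_bot(question: str, context: str) -> str:
    """Send retrieved manual context plus the user question to a Bedrock model."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 400,
        "messages": [
            {"role": "user",
             "content": f"Manual excerpts:\n{context}\n\nQuestion: {question}"}
        ],
    }
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps(body),
    )
    return json.loads(resp["body"].read())["content"][0]["text"]
```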
-
Language Models vs Embedding Models
In the ever-evolving landscape of Artificial Intelligence, two types of models stand out as fundamental building blocks for a vast array of applications: Large Language Models (LLMs) and Embedding Models. While both deal with text, their core functions, outputs, and applications differ significantly. Understanding these distinctions is crucial for anyone venturing into the world of natural… Read more
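The contrast is easiest to see side by side: an embedding model returns a vector, a language model returns text. A hedged sketch with the OpenAI SDK follows; the model names are examples and any comparable pair would do:

```python
# The core difference in one place: embeddings return vectors, LLMs return text.
# Model names are examples; any comparable pair would do.
from openai import OpenAI

client = OpenAI()

# Embedding model: text in, fixed-length vector out (for search, clustering, RAG)
emb = client.embeddings.create(model="text-embedding-3-small", input="reset my router")
vector = emb.data[0].embedding            # e.g. a list of floats

# Language model: text in, new text out (for answering, summarising, reasoning)
chat = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "How do I reset my router?"}],
)
answer = chat.choices[0].message.content
```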
-
Spring AI and Langchain Comparison
A Comparative Look at AI Application Development. The landscape of building applications powered by Large Language Models (LLMs) is rapidly evolving. Two prominent frameworks that have emerged to simplify this process are Spring AI and LangChain. While both aim to make LLM integration more accessible to developers, they approach the problem from different ecosystems and with… Read more
-
Automating Customer Communication: Building a Production-Ready LangChain Agent for Order Notifications
In the fast-paced world of e-commerce, proactive and timely communication with customers is paramount for fostering trust and ensuring a seamless post-purchase experience. Manually tracking new orders and sending confirmation emails can be a significant drain on resources and prone to delays. This article presents a comprehensive guide to building a production-ready LangChain agent designed… Read more
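The heart of such an agent is a tool it can call to send the notification. Below is a hedged sketch of one, defined with LangChain's @tool decorator; the SMTP settings, sender address, and function name are placeholders rather than the article's production code:

```python
# Sketch of a notification tool the agent could call. The @tool decorator is
# LangChain's; the SMTP settings and function name are placeholders.
import smtplib
from email.message import EmailMessage
from langchain_core.tools import tool

@tool
def send_order_confirmation(email: str, order_id: str) -> str:
    """Send a confirmation email for a newly placed order."""
    msg = EmailMessage()
    msg["Subject"] = f"Order {order_id} confirmed"
    msg["From"] = "orders@example.com"
    msg["To"] = email
    msg.set_content(f"Thanks! Your order {order_id} has been received.")
    with smtplib.SMTP("localhost") as smtp:       # assumption: a local mail relay
        smtp.send_message(msg)
    return f"confirmation sent to {email}"
```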
-
Intelligent Order Monitoring with LangChain LLM Tools
Building Intelligent Order Monitoring: A LangChain Agent for Database Checks. In today’s fast-paced e-commerce landscape, staying on top of new orders is crucial for efficient operations and timely fulfillment. While traditional monitoring systems often rely on static dashboards and manual checks, the power of Large Language Models (LLMs) and agentic frameworks like LangChain offers a more… Read more
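Complementing the notification side above, here is a hedged sketch of the database-check tool itself: a LangChain tool that reports orders newer than a given id. sqlite3 stands in for the production database, and the table schema and path are assumed:

```python
# Sketch of the database-check side: a LangChain tool that reports orders newer
# than a given id. sqlite3 stands in for the production database; schema is assumed.
import sqlite3
from langchain_core.tools import tool

DB_PATH = "orders.db"   # placeholder

@tool
def check_new_orders(last_seen_id: int) -> str:
    """Return a summary of orders with an id greater than last_seen_id."""
    conn = sqlite3.connect(DB_PATH)
    rows = conn.execute(
        "SELECT id, customer, total FROM orders WHERE id > ? ORDER BY id",
        (last_seen_id,),
    ).fetchall()
    conn.close()
    if not rows:
        return "No new orders."
    return "\n".join(f"#{oid}: {customer} ({total})" for oid, customer, total in rows)
```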
-
Spring AI chatbot with RAG and FAQ
This article demonstrates the concepts of building a Spring AI chatbot with both general-knowledge RAG and an FAQ section in a single comprehensive guide. Building a Powerful Spring AI Chatbot with RAG and FAQ: Large Language Models (LLMs) offer incredible potential for building intelligent chatbots. However, to create truly useful and context-aware chatbots, especially for specific domains, we… Read more
-
RAG with sample FAQ and LLM
Code Explanation: RAG with FAQ and OpenAI This Python code implements a Retrieval Augmented Generation (RAG) system specifically designed to answer questions from an FAQ dataset using OpenAI’s language models. Here’s a step-by-step explanation of the code, covering: 1. importing libraries; 2. load_faq_data(data_path); 3. chunk_faq_data(faq_data); 4. create_embeddings(chunks); 5. create_vector_store(chunks, embeddings); 6. create_rag_chain(vector_store, llm); 7. rag_query(rag_chain, query)… Read more
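As a hedged reconstruction of the first two helpers named above (not the article's exact code), the sketch below assumes the FAQ dataset is a CSV with question and answer columns:

```python
# Hedged reconstruction of the first two helpers named above. The CSV layout
# (question,answer columns) is an assumption about the FAQ dataset.
import csv

def load_faq_data(data_path: str) -> list[dict]:
    """Read FAQ rows from a CSV file into a list of {question, answer} dicts."""
    with open(data_path, newline="", encoding="utf-8") as f:
        return [{"question": r["question"], "answer": r["answer"]} for r in csv.DictReader(f)]

def chunk_faq_data(faq_data: list[dict]) -> list[str]:
    """Turn each Q/A pair into one retrievable text chunk."""
    return [f"Q: {item['question']}\nA: {item['answer']}" for item in faq_data]
```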
-
RAG with locally running LLM
Sample code to enable running the LLM locally, using a local LLM instead of OpenAI. The article covers the key changes required, how to run the code with a local LLM, and important considerations. Read more
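The key change when going local is swapping the OpenAI LLM object for a locally served model. Here is a hedged sketch using Ollama via langchain_community; it assumes an Ollama server is running locally with the "llama3" model pulled, which is not specified in the article:

```python
# Sketch of the key change: swap the OpenAI LLM for a locally served model.
# Assumes an Ollama server running locally with the "llama3" model pulled.
from langchain_community.llms import Ollama

local_llm = Ollama(model="llama3")   # replaces e.g. the OpenAI LLM in the RAG chain

prompt = (
    "Use the context to answer.\n"
    "Context: Our return window is 30 days from delivery.\n"
    "Question: How long do I have to return an item?"
)
print(local_llm.invoke(prompt))
```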
-
Implementing RAG with vector database
The article walks through an explanation of the implementation, its key points, and things to remember when implementing RAG with a vector database. Read more
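A minimal end-to-end sketch of the retrieval half is shown below: chunk, embed, index, and search. FAISS stands in here for whichever vector database the article uses, and the embedding model is only an example:

```python
# Minimal end-to-end sketch of RAG's retrieval half with a vector store.
# FAISS stands in for whichever vector database you use; the embedding model is an example.
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings   # pip install langchain-openai

docs = [
    "The warranty covers manufacturing defects for 24 months.",
    "Firmware updates are released quarterly.",
    "Support is available Monday to Friday, 9am-5pm CET.",
]

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
store = FAISS.from_texts(docs, embeddings)              # embed and index the chunks

hits = store.similarity_search("How long is the warranty?", k=1)
print(hits[0].page_content)                             # -> the warranty chunk
```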