{"id":16919,"date":"2026-01-10T10:47:16","date_gmt":"2026-01-10T10:47:16","guid":{"rendered":"https:\/\/thinkpeak.ai\/setting-up-crewai-with-local-llms\/"},"modified":"2026-01-10T10:47:16","modified_gmt":"2026-01-10T10:47:16","slug":"crewaini%cc%87n-yerel-llms-i%cc%87le-kurulmasi","status":"publish","type":"post","link":"https:\/\/thinkpeak.ai\/tr\/crewaini%cc%87n-yerel-llms-i%cc%87le-kurulmasi\/","title":{"rendered":"Yerel LLM'lerle CrewAI Kurulumu - 2026 K\u0131lavuzu"},"content":{"rendered":"<div style=\"background-color: #f0f7ff; border-left: 4px solid #0056b3; padding: 1.5rem; margin-bottom: 2rem; border-radius: 4px;\">\n<p style=\"margin: 0; font-size: 1.1rem; color: #333;\"><strong>Reading Time:<\/strong> ~18 Minutes<br \/>\n    <strong>Target Audience:<\/strong> Technical Founders, Python Developers, and AI Architects.<br \/>\n    <strong>Goal:<\/strong> To build a fully private, zero-cost AI workforce using CrewAI and local LLMs (Ollama), while identifying when to scale to enterprise solutions.<\/p>\n<\/div>\n<h2>Introduction<\/h2>\n<p>The promise of autonomous AI agents is seductive. Imagine a digital workforce that operates while you sleep. They execute complex workflows and scale your output infinitely.<\/p>\n<p>However, many businesses face a significant barrier. It isn&#8217;t technical capability. It is <b id=\"data-sovereignty-and-cost\">data sovereignty and cost<\/b>.<\/p>\n<p>Running a fleet of agents on GPT-4o allows you to burn through an API budget in days. Furthermore, sending proprietary financial data or internal strategy documents to a third-party cloud is often impossible. Regulated industries simply cannot take that risk.<\/p>\n<p>The solution is simple: <b id=\"local-large-language-models\">Local Large Language Models (LLMs)<\/b>.<\/p>\n<p>You can combine <b id=\"crewai-framework\">CrewAI<\/b> with <strong>Ollama<\/strong> or <strong>LM Studio<\/strong>. This allows you to run sophisticated multi-agent systems entirely on your own hardware. No data leaves your machine. No API bills accumulate.<\/p>\n<p>In this guide, we walk through the exact architecture required to set up CrewAI with local LLMs. We move beyond basic tutorials. We address real-world challenges like infinite loops, limited context windows, and optimizing smaller models for complex reasoning.<\/p>\n<h2>Why Run CrewAI Locally? The Business Case for On-Premise Agents<\/h2>\n<p>Before writing code, you must understand the architecture choice. The cloud offers raw power. However, local agents offer control.<\/p>\n<h3>1. Absolute Data Privacy<\/h3>\n<p>For legal, healthcare, and finance sectors, being &#8220;cloud-agnostic&#8221; is not enough. You need to be &#8220;cloud-absent.&#8221; By running Llama 3 or Mistral locally, your data never leaves your intranet.<\/p>\n<p>This enables you to build <b id=\"internal-tools-business-portals\">Internal Tools &#038; Business Portals<\/b>. You can process sensitive contracts or employee data without violating GDPR or HIPAA compliance.<\/p>\n<h3>2. Cost Predictability<\/h3>\n<p>API-based agents have variable costs. An infinite loop in a GPT-4 agent could cost you $50 before you catch it. A local agent costs you nothing but electricity.<\/p>\n<p>This makes local environments the perfect sandbox for <b id=\"custom-ai-agent-development\">Custom AI Agent Development<\/b>. You can iterate 1,000 times on a prompt without spending a dime.<\/p>\n<h3>3. Latency and Offline Availability<\/h3>\n<p>Local agents do not wait for network handshakes. 
Perhaps you run a <b id=\"custom-low-code-app\">Custom Low-Code App<\/b>. Or maybe you have an edge device in a warehouse with spotty internet. A local agent ensures your logic keeps running.<\/p>\n<blockquote>\n<p><strong>Thinkpeak Insight:<\/strong> We often recommend a &#8220;Hybrid Architecture.&#8221; Use local models for high-volume, low-complexity tasks. Route only high-level strategic reasoning to paid APIs like GPT-4. This balances cost with intelligence.<\/p>\n<\/blockquote>\n<h2>The Hardware Reality Check: What Do You Need?<\/h2>\n<p>Local LLMs are resource-hungry. You cannot run a competent multi-agent crew on a basic laptop. To get usable performance, you need specific hardware.<\/p>\n<h3>The &#8220;Sweet Spot&#8221; Requirements (Recommended)<\/h3>\n<ul>\n<li><strong>CPU:<\/strong> Apple M2\/M3\/M4 Pro or Max (Unified Memory is king), or a modern Intel\/AMD chip with AVX-512 support.<\/li>\n<li><strong>RAM:<\/strong> <b id=\"32gb-ram-minimum\">32GB is the new minimum<\/b> for multi-agent workflows. A quantized 8B model takes ~6GB on its own. Run it alongside an embedding model and your usual applications, and 16GB will choke.<\/li>\n<li><strong>GPU:<\/strong> NVIDIA RTX 3060\/4060 (12GB VRAM) or higher.<\/li>\n<li><strong>Storage:<\/strong> NVMe SSD. Loading models into RAM takes seconds vs. minutes on an HDD.<\/li>\n<\/ul>\n<h3>Model Selection Guide<\/h3>\n<p>The model you choose dictates your hardware needs.<\/p>\n<table border=\"1\" cellpadding=\"10\" cellspacing=\"0\" style=\"border-collapse: collapse; width: 100%; margin-bottom: 1.5rem;\">\n<thead>\n<tr style=\"background-color: #f2f2f2;\">\n<th style=\"text-align: left;\">Model Class<\/th>\n<th style=\"text-align: left;\">Examples<\/th>\n<th style=\"text-align: left;\">VRAM Required<\/th>\n<th style=\"text-align: left;\">Best For<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Small (7B-9B)<\/strong><\/td>\n<td>Llama 3.1 8B, Mistral 7B<\/td>\n<td>~6-8 GB<\/td>\n<td>Summarization, classification, email drafting.<\/td>\n<\/tr>\n<tr>\n<td><strong>Medium (14B-34B)<\/strong><\/td>\n<td>Gemma 2 27B, Yi 34B<\/td>\n<td>~16-24 GB<\/td>\n<td>Complex reasoning, coding, instruction following.<\/td>\n<\/tr>\n<tr>\n<td><strong>Large (70B+)<\/strong><\/td>\n<td>Llama 3.3 70B<\/td>\n<td>~40-48 GB<\/td>\n<td>Strategic planning, creative writing, nuance.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>Step 1: The Engine Room \u2013 Setting Up Ollama<\/h2>\n<p>We recommend <b id=\"ollama-integration\">Ollama<\/b> over LM Studio for CrewAI integration. It is built for developers. It runs as a background service and exposes a clean API.<\/p>\n<h3>1. Installation<\/h3>\n<p><strong>macOS \/ Linux:<\/strong><br \/>\nOpen your terminal and run:<\/p>\n<pre><code>curl -fsSL https:\/\/ollama.com\/install.sh | sh<\/code><\/pre>\n<p><strong>Windows:<\/strong><br \/>\nDownload the installer directly from the official Ollama website.<\/p>\n<h3>2. Pulling Your &#8220;Brains&#8221;<\/h3>\n<p>You need to download the models you intend to use. We will use <b id=\"llama-3-1-8b\">Llama 3.1 (8B)<\/b>. It offers the best balance of speed and reasoning for consumer hardware.<\/p>\n<p>Open your terminal\/command prompt:<\/p>\n<pre><code>ollama pull llama3.1<\/code><\/pre>\n<p><em>Optional: Pull an embedding model if you plan to use RAG (Retrieval Augmented Generation):<\/em><\/p>\n<pre><code>ollama pull nomic-embed-text<\/code><\/pre>\n<h3>3. Verify the Server<\/h3>\n<p>By default, Ollama runs on port <code>11434<\/code>. Verify it is running by visiting <code>http:\/\/localhost:11434<\/code> in your browser. You should see the message &#8220;Ollama is running.&#8221;<\/p>\n
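<p>If you&#8217;d rather script this check than open a browser, here is a minimal sketch using only the Python standard library. It assumes Ollama&#8217;s default port and queries the stock <code>\/api\/tags<\/code> endpoint, which lists the models you have pulled:<\/p>\n<pre><code>import json\nimport urllib.request\n\n# The root endpoint returns a plain-text health message.\nwith urllib.request.urlopen(\"http:\/\/localhost:11434\") as resp:\n    print(resp.read().decode())  # Expect: \"Ollama is running\"\n\n# \/api\/tags lists every model available to CrewAI on this machine.\nwith urllib.request.urlopen(\"http:\/\/localhost:11434\/api\/tags\") as resp:\n    for model in json.load(resp).get(\"models\", []):\n        print(model[\"name\"])  # e.g. \"llama3.1:latest\"<\/code><\/pre>\n<p>If <code>llama3.1<\/code> is missing from the list, the <code>ollama pull<\/code> step above did not complete.<\/p>\n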
<h2>Step 2: The Framework \u2013 Installing CrewAI<\/h2>\n<p>Thinkpeak.ai recommends using a virtual environment. This keeps your dependencies clean. CrewAI updates frequently, and version conflicts are common.<\/p>\n<pre><code># Create a virtual environment\npython -m venv crewai-local-env\n\n# Activate it\nsource crewai-local-env\/bin\/activate  # macOS\/Linux\ncrewai-local-env\\Scripts\\activate     # Windows\n\n# Install CrewAI, its tools package, and the Ollama connector\npip install crewai crewai-tools langchain-ollama<\/code><\/pre>\n<h2>Step 3: Architecting the Local Crew (Code Walkthrough)<\/h2>\n<p>Now, let\u2019s build a <b id=\"local-market-research-crew\">Local Market Research Crew<\/b>. This crew will consist of two agents:<\/p>\n<ol>\n<li><strong>The Researcher:<\/strong> Scrapes for information.<\/li>\n<li><strong>The Analyst:<\/strong> Summarizes the findings.<\/li>\n<\/ol>\n<p><strong>Note on &#8220;Dumb&#8221; Models:<\/strong> Local models like Llama 3.1 8B are not as smart as GPT-4. They struggle with complex tool usage. We must be very explicit in our prompts.<\/p>\n<h3>The Code Configuration<\/h3>\n<p>Create a file named <code>local_crew.py<\/code>.<\/p>\n<pre><code>from crewai import Agent, Task, Crew, Process, LLM\n\n# 1. Define the Local LLM\n# We connect to the Ollama instance running on localhost.\n# 'base_url' ensures CrewAI talks to your machine, not OpenAI's servers.\nlocal_llm = LLM(\n    model=\"ollama\/llama3.1\",\n    base_url=\"http:\/\/localhost:11434\"\n)\n\n# 2. Define Your Agents\n# Note the 'verbose=True' - this is crucial for debugging local loops.\n\nresearcher = Agent(\n    role='Local Data Researcher',\n    goal='Uncover detailed information about {topic}',\n    backstory=\"\"\"You are a meticulous researcher who works offline. \n    You excel at finding facts and organizing them clearly. \n    You do not hallucinate information.\"\"\",\n    llm=local_llm,\n    verbose=True,\n    allow_delegation=False  # Local models struggle with delegation logic\n)\n\nanalyst = Agent(\n    role='Insight Analyst',\n    goal='Summarize findings into a concise 3-point executive brief.',\n    backstory=\"\"\"You are a senior analyst. You take raw data and convert it \n    into actionable intelligence. You write in a corporate, professional tone.\"\"\",\n    llm=local_llm,\n    verbose=True,\n    allow_delegation=False\n)\n\n# 3. Define Tasks\n# Keep tasks simple for local models. One clear objective per task.\n\ntask_research = Task(\n    description=\"\"\"Research the key features and pricing of {topic}. \n    Provide a raw list of facts.\"\"\",\n    expected_output=\"A bulleted list of at least 5 key facts about {topic}.\",\n    agent=researcher\n)\n\ntask_analysis = Task(\n    description=\"\"\"Using the research provided, create a summary report. \n    Focus on the \"So What?\" - why does this matter to a business owner?\"\"\",\n    expected_output=\"A 3-paragraph executive summary.\",\n    agent=analyst\n)\n\n# 4. Instantiate the Crew\ncrew = Crew(\n    agents=[researcher, analyst],\n    tasks=[task_research, task_analysis],\n    process=Process.sequential,  # Sequential is safer for local reasoning\n    verbose=True\n)\n\n# 5. Kickoff\n# The 'inputs' dict fills every {topic} placeholder in the goals and tasks.\nprint(\"Starting the Local Crew...\")\nresult = crew.kickoff(inputs={'topic': 'The future of AI Agents in 2026'})\nprint(\"######################\")\nprint(result)<\/code><\/pre>\n
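<p>One optional refinement: if you want the final brief written straight to disk instead of just printed, <code>Task<\/code> accepts an <code>output_file<\/code> parameter in recent CrewAI releases. A minimal sketch (the filename is illustrative):<\/p>\n<pre><code>task_analysis = Task(\n    description=\"\"\"Using the research provided, create a summary report. \n    Focus on the \"So What?\" - why does this matter to a business owner?\"\"\",\n    expected_output=\"A 3-paragraph executive summary.\",\n    agent=analyst,\n    output_file=\"executive_brief.md\"  # CrewAI saves this task's final output here\n)<\/code><\/pre>\n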
<h3>Running the Script<\/h3>\n<p>Run <code>python local_crew.py<\/code>. You will see the agents &#8220;thinking&#8221; in your terminal. Speed depends entirely on your GPU.<\/p>\n<h2>Troubleshooting the &#8220;Local Loop of Death&#8221;<\/h2>\n<p>You may encounter a common issue. The agent starts a task, thinks, and then repeats &#8220;Thinking&#8230;&#8221; forever. You have to hit <code>Ctrl+C<\/code>.<\/p>\n<p>This is the biggest pain point when <b id=\"setting-up-crewai\">setting up CrewAI with local LLMs<\/b>.<\/p>\n<h3>Why does this happen?<\/h3>\n<p>Small models often fail to generate the specific &#8220;Stop Token.&#8221; They also struggle to format the JSON required for tools. The system rejects their malformed output, they try again, and an infinite loop begins.<\/p>\n<h3>How to Fix It<\/h3>\n<ol>\n<li><strong>Better System Prompts:<\/strong> Be explicit. Add to the backstory: <em>&#8220;Once you have the answer, you must provide the Final Answer immediately. Do not keep searching.&#8221;<\/em><\/li>\n<li><strong>Use max_iter:<\/strong> CrewAI allows you to cap the attempts.\n<pre><code>researcher = Agent(\n    ...\n    max_iter=5,  # Force stop after 5 attempts\n    ...\n)<\/code><\/pre>\n<\/li>\n<li><strong>Upgrade the Model:<\/strong> If Llama 3.1 8B loops, try <strong>Mistral-Nemo<\/strong> or <strong>Qwen 2.5<\/strong>. These often handle instructions better.<\/li>\n<\/ol>\n<h2>Scaling from Laptop to Enterprise: The &#8220;Hybrid&#8221; Approach<\/h2>\n<p>Running agents on a MacBook is great for prototyping. But you may need to process 5,000 rows of client data. Or perhaps you need to deploy this to a team.<\/p>\n<p>This is where <b id=\"thinkpeak-ai\">Thinkpeak.ai<\/b> bridges the gap.<\/p>\n<p>We transform local proofs-of-concept into <b id=\"bespoke-internal-tools\">Bespoke Internal Tools &#038; Custom App Development<\/b>. Often, the best architecture is <strong>Hybrid<\/strong>:<\/p>\n<ul>\n<li><strong>Tier 1 (Local):<\/strong> Use local agents for high-volume data cleaning, PII redaction, and drafting. This runs free on your internal servers.<\/li>\n<li><strong>Tier 2 (Cloud):<\/strong> When the local agent detects a complex strategic decision, it hands the task to a GPT-4 agent (see the sketch below).<\/li>\n<\/ul>\n
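<p>In CrewAI, this tiering can be as simple as giving each agent a different <code>llm<\/code>. A minimal sketch, assuming an <code>OPENAI_API_KEY<\/code> is set in your environment for the cloud tier (the roles and model names are illustrative):<\/p>\n<pre><code>from crewai import Agent, LLM\n\n# Tier 1: free, private, runs on your own hardware.\nlocal_llm = LLM(\n    model=\"ollama\/llama3.1\",\n    base_url=\"http:\/\/localhost:11434\"\n)\n\n# Tier 2: paid, smarter, reserved for strategic reasoning.\n# Assumes OPENAI_API_KEY is set in your environment.\ncloud_llm = LLM(model=\"gpt-4o\")\n\nscrubber = Agent(\n    role='Data Cleaner',\n    goal='Redact PII and normalize raw records.',\n    backstory='A meticulous, high-volume worker.',\n    llm=local_llm  # high-volume work stays free and local\n)\n\nstrategist = Agent(\n    role='Strategy Advisor',\n    goal='Make the final call on cases the cleaner escalates.',\n    backstory='A senior advisor consulted only when necessary.',\n    llm=cloud_llm  # only hard decisions incur API cost\n)<\/code><\/pre>\n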
<h3>Thinkpeak.ai: The Agency Overview<\/h3>\n<p>Thinkpeak.ai is an AI-first automation partner. We transform static operations into dynamic ecosystems. We combine advanced AI agents with robust internal tooling.<\/p>\n<p><strong>What We Specifically Offer:<\/strong><\/p>\n<p>We deliver value through instant deployment and bespoke engineering.<\/p>\n<p><strong>1. The Automation Marketplace (Ready-to-Use Products)<\/strong><br \/>\nFor immediate speed, we provide &#8220;plug-and-play&#8221; templates. These are optimized for Make.com and n8n.<\/p>\n<ul>\n<li><strong>Content &#038; SEO Systems:<\/strong> Our <b id=\"seo-first-blog-architect\">SEO-First Blog Architect<\/b> researches, analyzes competitors, and generates optimized articles.<\/li>\n<li><strong>Operations &#038; Data Utilities:<\/strong> Our <strong>Google Sheets Bulk Uploader<\/strong> cleans and formats thousands of rows of data instantly.<\/li>\n<\/ul>\n<p><strong>2. Bespoke Internal Tools &#038; Custom App Development (Services)<\/strong><br \/>\nThis is the &#8220;limitless&#8221; tier. If business logic exists, we can build the infrastructure.<\/p>\n<ul>\n<li><strong>Custom AI Agent Development:<\/strong> We create &#8220;Digital Employees&#8221; capable of reasoning and executing tasks 24\/7.<\/li>\n<li><strong>Total Stack Integration:<\/strong> We connect your CRM, ERP, and communication tools intelligently.<\/li>\n<\/ul>\n<p>If your local agents are unreliable, our <b id=\"complex-business-process-automation\">Complex Business Process Automation (BPA)<\/b> service can re-architect your workflow.<\/p>\n<p><a href=\"https:\/\/thinkpeak.ai\">Consult with Thinkpeak on Custom Agent Deployment<\/a><\/p>\n<h2>Advanced Configuration: Optimizing Local Performance<\/h2>\n<p>To get the most out of your setup, you need to tweak the parameters.<\/p>\n<h3>1. Context Window Management<\/h3>\n<p>Local models have strict context limits (usually 8k or 32k tokens). If you feed a large PDF to an 8k-context model, the request will fail or the text will be silently truncated.<\/p>\n<ul>\n<li><strong>Solution:<\/strong> Use a &#8220;Chunking&#8221; strategy. Break data into small pieces and process them sequentially.<\/li>\n<li><strong>Thinkpeak Tip:<\/strong> Our <b id=\"ai-proposal-generator\">AI Proposal Generator<\/b> uses this logic to ingest massive discovery notes without exceeding limits.<\/li>\n<\/ul>\n<h3>2. Temperature Tuning<\/h3>\n<p>Local models are more sensitive to the <code>temperature<\/code> setting than GPT-4.<\/p>\n<ul>\n<li><strong>For Creative Tasks:<\/strong> Set <code>temperature=0.7<\/code>.<\/li>\n<li><strong>For Logic\/Data Tasks:<\/strong> Set <code>temperature=0.1<\/code>. Local models hallucinate easily. Low temperature forces them to stick to facts.<\/li>\n<\/ul>\n<pre><code>local_llm = LLM(\n    model=\"ollama\/llama3.1\",\n    base_url=\"http:\/\/localhost:11434\",\n    temperature=0.1  # Strict adherence to facts\n)<\/code><\/pre>\n<h3>3. Network Usage (Ollama Binding)<\/h3>\n<p>By default, Ollama binds to <code>localhost<\/code>. To run CrewAI in a container or on a different machine, you must expose the host.<\/p>\n<p><strong>Linux\/Mac:<\/strong><\/p>\n<pre><code>OLLAMA_HOST=0.0.0.0 ollama serve<\/code><\/pre>\n<p>Then change your <code>base_url<\/code> to your desktop&#8217;s IP: <code>http:\/\/192.168.1.XX:11434<\/code>.<\/p>\n<h2>Real-World Use Case: The Local GDPR Compliance Officer<\/h2>\n<p>Let&#8217;s look at a practical application.<\/p>\n<p><strong>Scenario:<\/strong> You have a CSV with 10,000 customer support tickets. You need to identify sentiment and recurring issues. However, the tickets contain PII. You cannot upload this to ChatGPT.<\/p>\n<p><strong>The Solution:<\/strong> A Local CrewAI Setup.<\/p>\n<ol>\n<li><strong>Agent A (The Scrubber):<\/strong> Reads the CSV. It uses Llama 3 to identify and redact PII.<\/li>\n<li><strong>Agent B (The Analyst):<\/strong> Takes the redacted text, scores sentiment, and categorizes the underlying issue.<\/li>\n<li><strong>Agent C (The Reporter):<\/strong> Compiles the categorized issues into a clean report.<\/li>\n<\/ol>\n<p>This runs offline. The PII never touches the internet. A deterministic pre-scrub pass (sketched below) makes Agent A&#8217;s job safer still.<\/p>\n<p>Thinkpeak.ai\u2019s <b id=\"bespoke-internal-tools-service\">Bespoke Internal Tools<\/b> service can build this interface for you. We wrap this script in a user-friendly <strong>Softr<\/strong> or <strong>Retool<\/strong> dashboard. Your non-technical team can clean data with one click.<\/p>\n
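<p>As a sketch of that pre-scrub pass: the snippet below redacts obvious emails and phone numbers with regular expressions before any text reaches the model. The patterns are illustrative, not exhaustive; real deployments layer dedicated PII libraries on top.<\/p>\n<pre><code>import re\n\n# Illustrative patterns only - production systems need broader coverage.\nEMAIL = re.compile(r\"[\\w.+-]+@[\\w-]+\\.[\\w.-]+\")\nPHONE = re.compile(r\"\\+?\\d[\\d\\s().-]{7,}\\d\")\n\ndef pre_scrub(text: str) -> str:\n    \"\"\"Redact obvious PII before a ticket is handed to the local LLM.\"\"\"\n    text = EMAIL.sub(\"[EMAIL]\", text)\n    text = PHONE.sub(\"[PHONE]\", text)\n    return text\n\nprint(pre_scrub(\"Contact jane.doe@example.com or +44 20 7946 0958.\"))\n# -> \"Contact [EMAIL] or [PHONE].\"<\/code><\/pre>\n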
<h2>Conclusion<\/h2>\n<p>Setting up CrewAI with local LLMs is a strategic move. It ensures data sovereignty and operational resilience. By leveraging tools like Ollama and Llama 3, you build powerful agents that respect your privacy.<\/p>\n<p>However, local agents are not magic. They require careful prompt engineering and hardware management.<\/p>\n<p>Whether you deploy a single researcher or a fleet of hybrid agents, the future is automated.<\/p>\n<p><strong>Ready to build your own proprietary software stack?<\/strong><br \/>\nAt <strong>Thinkpeak.ai<\/strong>, we build ecosystems. We provide the infrastructure to turn manual operations into dynamic growth engines.<\/p>\n<p><a href=\"https:\/\/thinkpeak.ai\">Explore Thinkpeak&#8217;s Automation Marketplace<\/a><br \/>\n<a href=\"https:\/\/thinkpeak.ai\">Book a Discovery Call for Custom Engineering<\/a><\/p>\n<h2>Frequently Asked Questions (FAQ)<\/h2>\n<h3>Can I run CrewAI with local LLMs on a Windows laptop?<\/h3>\n<p>Yes, if you have sufficient RAM and a dedicated GPU. While Ollama works on Windows, we recommend WSL2 (Windows Subsystem for Linux) for the smoothest experience. Without a GPU, agents will run on your CPU. This is significantly slower but functional for testing.<\/p>\n<h3>Why does my local agent keep repeating the same thought?<\/h3>\n<p>This is a &#8220;context loop.&#8221; The model fails to recognize it has completed the step. To fix this, lower the <code>temperature<\/code> to 0. Add <code>max_iter=3<\/code> to your Agent definition. Simplify the Task description to be extremely explicit about the &#8220;Stop&#8221; condition.<\/p>\n<h3>Is Ollama better than LM Studio for CrewAI?<\/h3>\n<p>For automated workflows and coding, <b id=\"ollama-vs-lm-studio\">Ollama<\/b> is generally preferred. It is designed as a headless server with a simple API. <strong>LM Studio<\/strong> is excellent for testing models visually. However, Ollama&#8217;s lightweight nature makes it easier to integrate into Python scripts.<\/p>\n<h2>Resources<\/h2>\n<ul>\n<li><a href=\"https:\/\/docs.crewai.com\/en\/concepts\/llms\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/docs.crewai.com\/en\/concepts\/llms<\/a><\/li>\n<li><a href=\"https:\/\/docs.crewai.com\/learn\/llm-connections\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/docs.crewai.com\/learn\/llm-connections<\/a><\/li>\n<li><a href=\"https:\/\/www.fullyda.com\/courses\/crewai-tutorials\/\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/www.fullyda.com\/courses\/crewai-tutorials\/<\/a><\/li>\n<li><a href=\"https:\/\/methodlab.ai\/multi-agent\/how-to-connect-llama3-to-crewai-groq-ollama\/\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/methodlab.ai\/multi-agent\/how-to-connect-llama3-to-crewai-groq-ollama\/<\/a><\/li>\n<li><a href=\"https:\/\/www.youtube.com\/watch?v=4G6MlUxh3q8\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/www.youtube.com\/watch?v=4G6MlUxh3q8<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Set up CrewAI with local LLMs (Ollama). 
Keep your data private, reduce API costs, and run agents on your own hardware.<\/p>","protected":false},"author":2,"featured_media":16918,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[104],"tags":[],"class_list":["post-16919","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-agents"],"_links":{"self":[{"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/posts\/16919","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/comments?post=16919"}],"version-history":[{"count":0,"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/posts\/16919\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/media\/16918"}],"wp:attachment":[{"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/media?parent=16919"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/categories?post=16919"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/tags?post=16919"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}