Overview
The Hacker News community picked up this story, which gathered 13 points and 2 comments within a few hours, making it one of today's notable AI items. Original source: kelet.ai.
In this article we review the highlights of the story, analyze it from an Arab perspective, and consider what it means for Arabic-speaking users interested in AI tools.
Details
I've spent the past few years building 50+ AI agents in prod (some reached 1M+ sessions/day), and the hardest part was never building them; it was figuring out why they fail.

AI agents don't crash. They just quietly give wrong answers. You end up scrolling through traces one by one, trying to find a pattern across hundreds of sessions.

Kelet automates that investigation. Here's how it works:

1. You connect your traces and signals (user feedback, edits, clicks, sentiment, LLM-as-a-judge, etc.).
2. Kelet processes those signals and extracts facts about each session.
3. It forms hypotheses about what went wrong in each case.
4. It clusters similar hypotheses across sessions and investigates them together.
5. It surfaces a root cause with a suggested fix you can review and apply.

The key insight: individual session failures look random, but when you cluster the hypotheses, failure patterns emerge.

The fastest way to integrate is through the Kelet Skill for coding agents; it scans your codebase, discovers where signals should be collected, and sets everything up for you. There are also Python and TypeScript SDKs if you prefer manual setup.

It's currently free during beta, with no credit card required. Docs: https://kelet.ai/docs/

I'd love feedback on the approach, especially from anyone running agents in prod. Does automating the manual error analysis sound right?
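To make the clustering idea in steps 3 and 4 concrete, here is a minimal, hypothetical sketch in Python. It is not Kelet's actual implementation or API; the function names are illustrative, and a crude bag-of-words key stands in for whatever semantic similarity a real system would use. The point it demonstrates is the one the post makes: hypotheses that look random per session group into a few dominant failure patterns.

```python
from collections import defaultdict


def cluster_hypotheses(hypotheses):
    """Group (session_id, hypothesis_text) pairs into candidate failure patterns.

    Naive normalization (sorted unique lowercase words) stands in for the
    semantic/embedding similarity a production system would need.
    """
    clusters = defaultdict(list)
    for session_id, text in hypotheses:
        key = " ".join(sorted(set(text.lower().split())))
        clusters[key].append(session_id)
    # Largest clusters first: the most widespread candidate root causes.
    return sorted(clusters.values(), key=len, reverse=True)


# Illustrative per-session hypotheses (step 3 output).
hypotheses = [
    ("s1", "retrieval returned stale docs"),
    ("s2", "stale docs returned retrieval"),  # same words, same pattern
    ("s3", "tool call timed out"),
]

ranked = cluster_hypotheses(hypotheses)
print(ranked[0])  # → ['s1', 's2']: the dominant pattern to investigate first
```

The design point is that investigation happens per cluster, not per session: once `s1` and `s2` land in the same bucket, one root-cause analysis covers both.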
Original Source
This story is taken from Hacker News, one of the most widely followed technology communities in the world.