
Alexander Valenchits
Over the past few months, I’ve been deliberately paying for two AI services: Perplexity Pro and Kimi Allegretto. On the surface, that probably looks strange—even to me—but in practice they fill different roles. In this article, I want to share my personal experience: where Perplexity is genuinely stronger than a classic “chat with a model,” where Kimi turns out to be more useful, and what I end up criticizing in each of them.
I regularly do more than just coding—I also spend a lot of time on research and self-education: tracking who is launching what, which pricing plans are appearing, and how AI products and architectures are evolving. On top of that, I’m constantly reading articles, documentation, blogs, and technical deep dives. Standard assistants at the level of an “IDE helper” like Cursor and similar tools are not enough for this: they help write code, but they do not really handle deep research or help build a broad picture of the market.
At some point it became clear that I needed a separate layer in my stack for research and for working with large volumes of text and code. That’s how Perplexity Pro and Kimi Allegretto entered my workflow.
After a month of heavy use, Perplexity stopped feeling like “just another chatbot” and became a dedicated research tool for me. What I liked most was Deep Research: it can pull together insights from dozens of articles, attach source links, and re-check its own conclusions, turning large amounts of information into a coherent overview of a topic. If you give it meaningful context—not “find something somewhere on the internet,” but a clearly defined task with boundaries and nuances—it can produce a detailed and careful analysis.
The skills system deserves a separate mention. Perplexity lets you connect custom skills, including ones found on GitHub, as a single file, and then simply tell the model to “use this skill.” I tested this for SEO optimization and various research workflows, and the combination of Deep Research plus skills gives a very noticeable boost in depth and analysis quality.
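For readers who have not seen this pattern, skills of this kind are usually a single markdown file: a short frontmatter block plus plain-language instructions in the body. I do not know Perplexity’s exact schema, so treat the following as a hypothetical sketch of an SEO-audit skill, with the name and field layout assumed rather than taken from official docs:

```markdown
---
name: seo-audit            # hypothetical name, not an official Perplexity skill
description: Review a page or article draft for on-page SEO issues and suggest fixes.
---

When this skill is invoked:

1. Extract the title, headings, meta description, and main keywords from the input.
2. Check heading hierarchy, keyword placement, internal link coverage, and readability.
3. Return a prioritized list of concrete fixes, pointing to the exact section each one applies to.
```

The appeal is that the whole skill is self-contained: one file you drop in and then reference by name in a prompt.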
From a developer’s perspective, I was also impressed by Perplexity Computer with Anthropic’s latest model. It is a separate paid mode available on higher-tier plans, and it launches a full agent with access to the browser, files, and codebase. When I pointed it at a real Git repository and gave it the necessary skills, I got a refactoring result that genuinely felt like it had been done by a human developer: it reviewed the code, organized everything clearly, and ultimately proposed a pull request with understandable changes.
There is also the separate Personal Computer initiative. Perplexity recently announced the Personal Computer program and opened a waitlist: the idea is to build an “AI operating system” that lives on your Mac, continuously works with local files and apps, and orchestrates tasks in the background. Naturally, I joined the waitlist, but for now it is still a closed product, and the timeline and chances of getting access look rather vague.
I also want to highlight Comet, Perplexity’s own browser. It is not just a Chromium fork, but a browser with a built-in assistant that understands page context and can work across multiple tabs like an agent. In practice, it feels like something between Yandex Browser with AI features and a standalone agentic environment.
Comet also holds up well as a translator: it understands context and preserves both structure and meaning, so reading articles and documentation in other languages feels comfortable. You can also use it as an agent: give it a task like “go through several pages, collect the facts, make a summary, and draft an article or a table,” and Comet will actually move across tabs, extract data, and return something already structured and meaningful. That saves time when you need to turn a set of links into text or organized data quickly.
That said, Comet has one important limitation: it cannot translate videos such as YouTube clips, while Yandex Browser can at least generate a rough Russian translation after a pause—often enough to understand the gist.
In terms of multimodality, Perplexity looks quite decent: thanks to built-in Gemini and GPT models, it can produce reasonable images and basic videos, especially when you need complex prompts or text rendered inside images. But there are still some unclear generation limits tied to plan level and credits, so it does not really work as a primary “visual content factory.” My practical conclusion is that this is a useful layer on top of research, not a replacement for dedicated graphics or video tools.
Despite all the strengths, Perplexity has several drawbacks that are critical for me.
First, there is the complete lack of transparency around limits. As a user, I do not fully understand how many Deep Research runs, Computer sessions, or tokens I still have left: the dashboard does not provide a clear enough usage overview. Sometimes the service suddenly refuses access to certain features in the middle of the week, even though the billing month is not over yet, and from the outside it looks like random disabling without a clear counter. If you care about planning workload and budget, this is much more irritating than it should be.
Second, there is the pricing structure. Right now there is a reasonably priced Pro plan and then a jump straight to Max/Computer-level plans at around $200+ per month, which include Computer access and much higher limits, with nothing in between.
Third, there is the lack of native VS Code integration. Compared with Kimi, which has Kimi Code and an official IDE extension, Perplexity feels slightly disconnected from my day-to-day development environment. If they introduced a middle-tier plan with basic Computer access plus an official VS Code plugin, Perplexity could become not just an answer engine for me, but a real infrastructure layer around development.
I started using Kimi Allegretto only recently, but even after a couple of days it became clear that this is not just a chat app—it is more of an ecosystem built around the K2.6 model.
For a developer, the first thing that stands out is Kimi Code with its native VS Code extension: you can discuss code directly inside the editor, inspect diffs, and apply changes without switching back to the browser. That feels natural because the assistant lives in the same place as the main workflow.
Another important part of the ecosystem is Kimi Claw. In essence, it is a persistent agent that lives in its own workflow space, has its own folder, stores files, and accumulates experience and skills as work progresses. Unlike disposable chat sessions, Claw builds a knowledge base around your tasks, which makes it easier to return later and continue the flow instead of starting from scratch.
At the platform level, Kimi feels cohesive: there is a desktop client, mobile apps, and a web interface, so you can access the same agent from any device. The ecosystem also includes web tools for internet access and several agent modes—from document and slide processing to Sheets integrations and Agent Swarm scenarios. I have not had time to test everything yet, but the platform clearly aims to be broader than a simple chat app.
One specific feature that pleasantly surprised me was the agent mode for websites. In a simple “describe what you want” format, Kimi can produce a pretty decent UI: it can think through page structure, navigation, section copy, and generate interface code. This is not design-award material, of course, but for prototypes and first product versions the output is already usable, roughly on the level of modern design-to-code tools like Vercel’s v0.
Compared with Perplexity, Kimi’s pricing feels especially pleasant. There are several tiers, and you can choose a comfortable “middle” package with higher limits instead of jumping straight into something that feels enterprise-grade.
That is the plan I chose, and so far the limits seem sufficient. The dashboard is also more transparent: it shows how many requests and tokens have already been used, so you can actually see where your quota is going and plan when to run heavy agent tasks or long sessions.
Kimi also has some serious limitations.
First, there is the language issue. The ecosystem is clearly optimized for English and Chinese first, and that is noticeable in practice. In my case, Kimi did not handle Russian dictation at all in the way I expected, so I either have to switch to English or type manually. If a large part of your workflow depends on Russian or other less-supported languages, Kimi currently feels much less convenient.
Second, there are the limitations of Kimi Code in VS Code. In the current implementation, you can only keep one active agent per window: when you launch a new task, the previous one effectively stops. Compared with tools like Codex or Cursor, where parallel development contexts are normal, this is a real step backward in convenience when juggling multiple tasks and repositories at once.
Third, there is speed. Compared with Codex and Cursor on some tasks, Kimi feels noticeably slower, sometimes by a large margin. The main reason seems to be the thinking mode: when it is enabled, the model produces long reasoning chains. If you turn thinking off, Kimi does become faster, but at the cost of quality: the answers become more superficial, so you constantly have to choose between depth and speed.
Finally, there is one very practical point: visual tasks burn through limits quickly. In my case, a test website design with 3–4 pages in agent mode consumed around 4–5% of the monthly Allegretto limit, which feels expensive for that scope. Video was even worse: two test runs produced very weak results but consumed roughly a third of the monthly limit. By contrast, normal code-related work seems much more economical. So my practical conclusion is simple: in Kimi, multimodality—especially video—does not currently justify the limit cost, and the tool makes the most sense primarily for text and code.
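To make that limit cost concrete, here is a back-of-the-envelope sketch in Python based purely on my own observations above; the percentages are what I saw in my dashboard, not official pricing:

```python
# Rough quota math from my own test runs on the Allegretto plan.
# These percentages are my observations, not published pricing.

site_cost = 0.045       # a 3-4 page site in agent mode ate ~4-5% of the month
video_cost = 0.33 / 2   # two video runs ate roughly a third, so ~16.5% each

print(f"Sites per month, if I did nothing else: {1 / site_cost:.0f}")        # ~22
print(f"Video runs per month, if I did nothing else: {1 / video_cost:.0f}")  # ~6
```

Put side by side, here is how the two services compare on the criteria that matter to me: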
| Criterion | Perplexity Pro | Kimi Allegretto |
|---|---|---|
| Main focus | Answer engine and research: Deep Research, citations, verifiable answers | Long context, agents, and a working environment around code and documents |
| Web and sources | Strong live web search, neat citations, structured summaries | Web tools exist, but less emphasis on transparent source citation |
| Browser | Comet as a browser with an assistant and translator, but no video translation | More classic browser-style workflows through web tools |
| Code and refactoring | Skills plus higher-end agent workflows for repository analysis and PR-style refactoring | Kimi Code with VS Code integration, plus persistent agents such as Claw |
| Multimodality | Good images and basic video, but with plan and credit limits | Available, but in my experience expensive in terms of limits |
| Plans and limits | Pro around $20/month, Max around $200/month, and the jump between them is steep | Multiple tiers, with Allegretto as a workable middle option |
| IDE integration | No native VS Code plugin publicly highlighted in current plan materials | Kimi Code includes VS Code-oriented workflows |
| Languages and dictation | Better suited to multilingual reading and research in my workflow | Stronger for English/Chinese, weaker for Russian dictation in my use |
| Speed | Generally fast, with tolerable delay even in heavier modes | Slower in thinking mode, with more trade-off between speed and depth |
I still have not fully decided what will ultimately stay in my stack. For now, I’m going to keep using both in parallel, compare how they handle the same tasks, and if anything particularly interesting comes out of that, I’ll return with another article.