Jamie's NLP Just Got a Lot Smarter (Thanks, DeepSeek V4)

uncleJim21

npub: npub1g5642xjqyudstx4e9dc702m7suqqvx3djxcdyre38muz7pfwkzzsye9lcr

hex: 347c3d765012c38e48678038fa85ff567489cb36f1452a68a1ca943a34f7ced7

nevent: nevent1qqsrglpawegp9suwfpncqw86shl4vayfevm0z3f2dzsu49p6xnmua4cprpmhxue69uhhyetvv9ujuem4d36kwatvw5hx6mm9qgsy2d24rfqzwxc9n2ujku084dlgwqqxrgkervxjpucna7p0q5htppgf5924m

naddr: naddr1qq25umekvexrv32nx9p5c4zhwyc8vsjnd9zksqgcwaehxw309aex2mrp0yhxwatvw4nh2mr49ekk7egzypzn24g6gqn3kpv6hy4hreat06rsqps69kgmp5s0xyl0stc996cg2qcyqqq823cg4x9vs

Kind 30023 (Article)

Published: 2026-04-28T23:50:24Z

TL;DR: Jamie Pull's V2 release powers a new "Deep Mode" with DeepSeek V4. Multi-angle podcast research, better proper-noun search, same 10-cent pricing.

Web App | Agent Quick Start

What's New

With Jamie Pull's V2 release, Deep Research mode powered by DeepSeek V4 is running the show.

We swapped in DeepSeek's open-source V4 model for search and synthesis. It's a 1.6 trillion parameter Mixture-of-Experts beast that costs a fraction of what closed models charge. V4-Flash runs at $0.14 per million tokens while matching GPT-4 class performance. That's 268x cheaper than Claude Opus.

All those savings? We reinvested them into making Jamie search harder and think deeper.

Multi-angle research, not just "here's the top result"

When you ask Jamie a question now, it explores the topic from multiple angles. Ask about CBDCs and you'll get the Bitcoin maximalist take, the Fed perspective, the privacy angle, the developing-world view. Comprehensive answers, not just first-match-wins.

Deep vs Fast: Choose Your Best Fit

We give you two ways to ask. Deep mode (the default) throws our most capable models at your question—multi-step reasoning, cross-referenced sources, the works. You'll wait 60-90 seconds for that thoroughness.

image

Fast mode runs a leaner, single-pass answer in 30-45 seconds. Perfect for quick lookups or questions you mostly know the answer to. The kicker: both cost the same per call. No premium tier, no upcharge for "thinking harder."
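The two modes amount to one question with a depth switch. Here's a minimal sketch of how a client might express that choice; the `mode` field and the payload shape are hypothetical illustrations, since the article doesn't document the request schema — only the Deep/Fast distinction and the timings come from the text above.

```python
# Sketch of selecting Deep vs Fast mode in a request payload.
# The "mode" field name and payload structure are assumptions for
# illustration; only the Deep-by-default behavior and the rough
# latencies (~60-90s deep, ~30-45s fast) come from the article.

def build_pull_request(question: str, deep: bool = True) -> dict:
    """Build a research-call payload; Deep is the default."""
    return {
        "question": question,
        "mode": "deep" if deep else "fast",  # deep: ~60-90s, fast: ~30-45s
    }

payload = build_pull_request("What did guests say about CBDCs?", deep=False)
```

Either way the call costs the same, so the only trade-off a client makes here is latency versus depth.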

Why This Matters

Open-source models are eating the world. DeepSeek V4 dropped on April 24th with MIT-licensed weights and benchmark scores that rival or beat GPT-5 and Claude Opus on coding tasks. But it costs pennies on the dollar.

That price gap isn't just academic. It's what lets us run deeper, more comprehensive searches without charging you $50/month.

Same 10 cents per call. More angles explored. Better answers.

Try It: Web App | Agent Quick Start

Still L402 Lightning payable. Still zero setup. Just better.

FAQ

What changed?

We upgraded to DeepSeek V4, reinvested the cost savings into deeper multi-angle search, and fixed proper noun matching. Same price, better answers.

What's the difference between Deep and Fast mode?

Deep mode uses our most capable models for multi-step reasoning and cross-referenced sources (~60-90 seconds). Fast mode runs lighter models for single-pass answers (~30-45 seconds). Same price per call. Pick Deep for thorough research, Fast for quick lookups. You can switch between them mid-conversation with one tap.

Is it more expensive now?

Nope. Still 10 cents per research call.

What's DeepSeek V4?

Open-source 1.6 trillion parameter model that matches GPT-4/Claude quality at a fraction of the cost. MIT licensed, released April 24th, 2026.

Will this work with AI agents?

Yes. Hit the /api/pull endpoint with L402 auth. You get back structured JSON with timestamps, clips, and metadata. No hallucination, just actual quotes with audio proof. See the Agent Quick Start API Docs.
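The L402 handshake an agent would run against /api/pull follows the usual pattern: make the request, receive a 402 challenge carrying a macaroon and a Lightning invoice, pay the invoice, then retry with the preimage. The helper below parses a challenge in the common L402/LSAT header format; the exact challenge served by /api/pull is an assumption, not something the article specifies.

```python
# Sketch of the L402 client-side flow, assuming the common
# 'L402 macaroon="...", invoice="..."' challenge format.
import re

def parse_l402_challenge(www_authenticate: str) -> dict:
    """Extract the macaroon and Lightning invoice from a 402 challenge."""
    fields = dict(re.findall(r'(\w+)="([^"]*)"', www_authenticate))
    return {"macaroon": fields["macaroon"], "invoice": fields["invoice"]}

def l402_authorization(macaroon: str, preimage_hex: str) -> str:
    """Build the Authorization header for the paid retry."""
    return f"L402 {macaroon}:{preimage_hex}"

# Flow (comments only — no network calls in this sketch):
# 1. POST /api/pull -> HTTP 402 with a WWW-Authenticate challenge
# 2. chal = parse_l402_challenge(resp.headers["WWW-Authenticate"])
# 3. Pay chal["invoice"] over Lightning and obtain the payment preimage
# 4. Retry with {"Authorization": l402_authorization(chal["macaroon"], preimage)}
chal = parse_l402_challenge('L402 macaroon="AGIAJE...", invoice="lnbc100n1..."')
```

The preimage doubles as a proof of payment, which is why the retry needs no account or API key.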

Can I try it without paying?

Yes. The web app has free trial credits. Just go to https://pullthatupjamie.ai/app?view=agent and ask a question.
