Writing for XDA-Developers, Mahnoor Faisal details four specific habits that transformed how she uses the AI search tool Perplexity, moving beyond basic prompting. The key strategies include actively using the “Personalization” feature as a system prompt, leveraging “Spaces” to compartmentalize research threads by topic, using pre-built “Templates” for tasks like brainstorming, and regularly exporting full chat threads as Markdown or PDF files to feed into note-taking and research tools like Obsidian or Google’s NotebookLM. These habits, which the author admits she adopted later than she should have, turn Perplexity from a casual search replacement into a structured research and learning hub that tailors results to the user’s context.
The system prompt is key
Here’s the thing a lot of us miss: the “Personalization” box isn’t just a gimmick. For years, system prompts in consumer AI tools were kind of a joke; they barely worked. But that’s changed. Now, telling Perplexity who you are and what you generally want is arguably the most powerful single setting. Even a couple of lines, say, telling it you’re a developer who wants concise answers backed by primary sources, changes the flavor of every response. It’s the difference between getting generic, one-size-fits-all answers and responses that feel tailored. And the same logic applies within Spaces, where “Custom Instructions” let you dictate tone, format, and even which sources to prioritize. Ignoring this is basically choosing to have a more superficial conversation with the AI. Why wouldn’t you want it to know you?
Organization isn’t just for neat freaks
This was my biggest “aha” moment from the article. I used to treat every Perplexity session like a fresh Google search, too. But an AI chat has memory. Every prompt in a thread influences the next response. So if you’re asking about quantum physics in the morning and pizza recipes at night in the same thread, you’re going to get some weird, contaminated results. Using Spaces as folders for topics is a simple fix that makes a dramatic difference. It forces discipline and keeps the AI’s context focused. It seems like a small thing, but for research, it’s everything. The output just becomes more relevant.
Don’t reinvent the wheel: use templates
I love this because it’s a shortcut to good prompt engineering. Perplexity has around 50 built-in templates for things like “Brainstorm Buddy” or “Interview Prep.” Basically, they pre-fill those custom instruction boxes for you. This is perfect for when you know you want a specific type of interaction but don’t want to craft the perfect system prompt from scratch every single time. It’s a low-effort way to instantly upgrade the quality of your session for a given task. Think of it as choosing the right tool from a toolbox instead of just using a hammer for everything.
The real magic happens after you export
This is the habit that ties everything together and makes Perplexity part of a lasting workflow rather than a transient Q&A. The ability to export a full thread as clean Markdown is a game-changer. You can drop it straight into Obsidian, Logseq, or any other note-taking app and you’ve got a neatly formatted research note with all your questions, Perplexity’s answers, and the citations intact. But the killer combination, as the article points out, is pairing it with NotebookLM. Export a handful of threads on a topic into one folder, point NotebookLM at it as sources, and suddenly you have an AI assistant grounded in your own research rather than the open web. That’s powerful. It turns a search session into a knowledge asset you can keep building on.
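If you want to automate that filing step, a tiny script can sweep exports into per-topic folders for NotebookLM. To be clear, this is just my own sketch, not anything from the article or from Perplexity itself: it assumes the exported threads land as .md files in your Downloads folder and that you start each export’s filename with a topic word, so adjust the paths and the naming rule to whatever you actually do.

```python
#!/usr/bin/env python3
"""Sort exported Perplexity Markdown threads into per-topic folders.

Sketch under assumptions: exports are plain .md files sitting in ~/Downloads,
and the topic is the first hyphen-separated word of the filename, e.g.
"quantum-entanglement-basics.md" is filed under "quantum".
"""
from pathlib import Path
import shutil

DOWNLOADS = Path.home() / "Downloads"          # where the exported .md files land (assumption)
LIBRARY = Path.home() / "notebooklm-sources"   # the folder you point NotebookLM at (assumption)


def topic_of(filename: str) -> str:
    """Use the first hyphen-separated word of the filename as the topic name."""
    return filename.split("-", 1)[0].lower() or "misc"


def collect_exports() -> None:
    """Move every exported Markdown thread into LIBRARY/<topic>/."""
    for md_file in DOWNLOADS.glob("*.md"):
        dest_dir = LIBRARY / topic_of(md_file.name)
        dest_dir.mkdir(parents=True, exist_ok=True)
        shutil.move(str(md_file), dest_dir / md_file.name)
        print(f"filed {md_file.name} -> {dest_dir}")


if __name__ == "__main__":
    collect_exports()
```

Run it after a research session and each topic folder becomes a self-contained bundle: upload its files as sources to their own NotebookLM notebook, and every notebook stays scoped to one subject, which mirrors the Spaces discipline from earlier.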
So look, the core lesson here isn’t really about four random features. It’s about a shift in mindset. You can’t just use Perplexity like a slightly smarter Google. To get real value, you have to engage with it as a context-aware research assistant. That means giving it context (personalization), keeping that context clean (Spaces), using the right frameworks (templates), and preserving the output (export). Do that, and it stops being just another tab in your browser and starts feeling like a core part of your thinking stack.
