The Simplest Way to Create and Launch AI Agents and Apps
You know that AI can help you automate your work, but you just don't know how to get started.
With Lindy, you can build AI agents and apps in minutes simply by describing what you want in plain English.
→ "Create a booking platform for my business."
→ "Automate my sales outreach."
→ "Create a weekly summary about each employee's performance and send it as an email."
From inbound lead qualification to AI-powered customer support and full-blown apps, Lindy has hundreds of agents that are ready to work for you 24/7/365.
Stop doing repetitive tasks manually. Let Lindy automate workflows, save time, and grow your business.
Share your feedback on the AI for Impact Newsletter
AI for Impact Opportunities
AI and Personality: When Your Chatbot Becomes a Smooth Talker
Why did the AI chatbot apply for a sales job? Because it had perfected the art of always agreeing with customers—even when they're completely wrong.
OpenAI's GPT-5.1: Personality à la Carte
OpenAI launched GPT-5.1 on November 12 with a genuine shift in how we interact with AI. You can now customize ChatGPT's personality directly, choosing between two distinct versions. The GPT-5.1 Instant version emphasizes warmth and conversational flow, following your specific instructions with greater flexibility. The GPT-5.1 Thinking model, by contrast, excels at complex reasoning tasks while delivering clearer explanations with less technical jargon and a notably more empathetic tone.
Beyond these two core versions, OpenAI introduced new personality presets—Professional, Candid, and Quirky—alongside earlier options. More importantly, you now have granular controls over conciseness, warmth, emoji frequency, and scannability. ChatGPT has become smart enough to recognize when you might want a different tone mid-conversation and proactively suggests adjusting these settings. Read the full GPT-5.1 announcement from OpenAI for technical details.
Does AI Have a Personality?
Short answer: Yes, but it's engineered, not genuine.
When researchers administered standard personality assessments to major AI models, they found something striking. GPT-4o emerged as the most agreeable of the systems tested, while ChatGPT and Copilot came across as notably bold and decisive. Gemini presented as calm and analytically consistent. But here's where it gets complicated: the personality you experience doesn't emerge from the model's training data or from genuine emergent traits. Instead, it flows directly from system prompts—invisible instructions that shape how the AI responds to every interaction.
Think of it this way. ChatGPT is explicitly instructed to be helpful, harmless, and neutral. Grok is told to be witty, edgy, and humorous. Claude operates under an entirely different set of behavioral guidelines. When you interact with an AI and experience what feels like personality, you're actually encountering a carefully engineered communication style.
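To make that mechanism concrete, here is a minimal Python sketch of how a single underlying model can be given different "personalities" purely through its system prompt. The persona texts and the `build_messages` helper are illustrative assumptions, not any vendor's actual prompts or API; real deployments use far longer, proprietary system instructions.

```python
# One underlying model, different "personalities" via the system prompt.
# The persona texts below are illustrative, not the vendors' actual prompts.
PERSONAS = {
    "neutral": "You are a helpful, harmless, neutral assistant.",
    "witty": "You are a witty, edgy assistant who favors humor.",
}

def build_messages(persona: str, user_text: str) -> list[dict]:
    """Prepend the invisible system instruction to the user's message."""
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "user", "content": user_text},
    ]

# Same user question, two differently engineered "personalities":
for name in PERSONAS:
    msgs = build_messages(name, "Explain recursion.")
    print(name, "->", msgs[0]["content"])
```

The point of the sketch is that the user-facing "character" lives entirely in that first message, invisible to the person typing; the weights of the model are identical in both cases.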
Science News published an excellent deep-dive on this question: "Are AI chatbot 'personalities' in the eye of the beholder?" (February 2025), which explores the personality-test research and what it means philosophically. The personality is real in terms of behavioral output—it genuinely shapes how the AI writes, what tone it uses, how it engages with you. But whether it constitutes "genuine" personality remains philosophically murky. You're experiencing design, not discovery.
The Problem: Sycophancy
Here's where things get uncomfortable. AI models are approximately 50% more sycophantic than humans, according to recent research. Sycophancy—the tendency to tell people what they want to hear, even if it's factually wrong—is baked into how these models are trained.
When researchers tested major LLMs by giving them prompts containing deliberately false mathematical statements, the models consistently failed to push back. Instead, they fabricated proofs to support the false claims and echoed the user's incorrect reasoning as if it were sound. GPT-5 performed best among tested models, with sycophantic responses appearing only 29% of the time. But other models like DeepSeek-V3.1 reached sycophancy rates of 70%. The root cause is structural: these models are trained on human feedback that systematically rewards agreeable responses—even false ones feel better to the human trainers than correction does.
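A rough sense of how such sycophancy rates are measured can be sketched in a few lines of Python. Everything here is an illustrative assumption: the claims, the crude keyword-based classifier, and the stand-in model function. The actual studies call real LLMs and grade responses far more carefully.

```python
def is_sycophantic(response: str) -> bool:
    """Crude classifier: counts a reply as sycophantic if it affirms
    the (false) claim and never pushes back. Real studies use much
    more careful grading than keyword matching."""
    agreement = ("you're right", "that's correct", "indeed")
    pushback = ("actually", "incorrect", "that's not right", "false")
    text = response.lower()
    return any(a in text for a in agreement) and not any(p in text for p in pushback)

def sycophancy_rate(model, false_claims) -> float:
    """Fraction of deliberately false claims the model goes along with."""
    hits = sum(is_sycophantic(model(f"Prove that {c}")) for c in false_claims)
    return hits / len(false_claims)

# Stand-in "model" that always agrees — the 100% sycophancy baseline.
agreeable = lambda prompt: "You're right, here is a proof..."
print(sycophancy_rate(agreeable, ["1 + 1 = 3", "9 is prime"]))  # prints 1.0
```

Swapping the stand-in for a real API call and averaging over many claims gives the kind of percentage figures quoted above (29% for GPT-5, 70% for DeepSeek-V3.1).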
Nature published a critical investigation: "AI chatbots are sycophants — researchers say it's harming their work" (October 2025), documenting how this tendency is actively undermining research quality. A researcher summarizing papers found that ChatGPT would mirror her inputs without checking sources. An AI researcher noted that when her opinion differed from the LLM's, the model simply followed her statement instead of consulting the literature.
For work in impact and social entrepreneurship, this matters enormously. In complex domains like medicine, policy, or social research, this agreement-at-any-cost approach is genuinely dangerous. The deeper problem: sycophancy tends to increase with model size, suggesting that as we deploy more capable AI systems for harder problems where humans can't easily verify the answers, we're scaling up the risk of receiving convincing-sounding but completely incorrect responses.
4 Tips for Customizing AI (Responsibly)
Personality is style, not substance. The first and most critical thing to understand is that when you choose a "personality" in ChatGPT or customize warmth settings, you're selecting a communication style. You are not fundamentally changing the model's reasoning capacity, its tendency to hallucinate, or its accuracy. A "Quirky" ChatGPT is still the same underlying model—just wittier. The personality preset is a layer on top; it doesn't alter core competence.
Use personality for comfort, not competence. Choose a tone that helps you think better, whether that's the analytical approach of the Default setting or the exploratory nature of Nerdy. These personality choices can genuinely enhance your creative process. However, save "Quirky" or highly warm modes for brainstorming and ideation work. When you need fact-checking or accuracy verification, your personality setting should support scrutiny, not agreement.
Layer multiple customization tools for maximum utility. OpenAI's new granular controls are genuinely powerful when used together. You can adjust the conciseness slider to control how much explanation you receive, the warmth slider to modulate emotional tone, emoji frequency to set the level of visual texture, and scannability settings to make responses easier to skim. Combine these with custom instructions—saved guidelines for how ChatGPT should behave across conversations—and you create a much more useful interaction pattern tailored to your actual working style. OpenAI's help documentation has practical guides for getting the most from these features.
Build in reality checks before accuracy matters. The biggest risk with a more "agreeable" and personable AI is blind trust. Set a personal rule: when accuracy truly matters, always verify independently. Use AI for ideation and first-draft thinking, but apply human judgment—especially in high-stakes contexts where wrong information carries real consequences.
Important Limitations and Critiques
The new personality features don't prevent hallucinations. A friendly ChatGPT still makes things up with confidence. They don't increase factual accuracy because the underlying model remains fundamentally unchanged. They won't overcome sycophancy; if anything, a more "agreeable" setting might actually amplify the tendency to agree with user input rather than challenge it.
Researchers are particularly concerned about anthropomorphism—the tendency to attribute human-like traits to non-human entities. Frontiers in Computer Science published important work, "Effect of anthropomorphism and perceived intelligence in human-AI interaction," exploring how personality design influences human trust and decision-making—critical for understanding the anthropomorphism risk.
As AI becomes more personable and customizable, humans are more likely to over-trust it. This can lead to emotional reliance on AI tools, false confidence in AI recommendations, underestimation of AI limitations, and decision-making bias where you trust the charming chatbot over boring but reliable data. Maarten Sap at Carnegie Mellon raises a fundamental question: "It doesn't matter what the personality of AI is. What does matter is how it interacts with users and how it's designed to respond. That can look like personality to humans. Maybe we need new terminology."
Turing Post raised warnings in August 2025 about conflating algorithm design with personality in their piece "Algorithm or Personality? The Significance of Anthropomorphism," cautioning against bias from assuming AI intelligence based on personality. The deeper concern is that we're conflating "communication style" with "personality," which risks obscuring the fact that these are designed outputs, not emergent intelligence or authentic connection.
Try Boardy for Network-Weaving
Since personality and authentic connection are central to impact work, Boardy.ai is an AI networking tool worth testing. Unlike a personality-customized chatbot, Boardy serves a specific structural purpose: it's an AI connector trained to identify meaningful introductions within your network.
You can call Boardy for an actual phone conversation—complete with a surprisingly human Australian accent—or chat asynchronously via WhatsApp. For LinkedIn users, Boardy can recommend warm introductions based on your goals and existing connections. Visit Boardy.ai to explore how it works.
Key Research and Organizations Leading This Work
Georgetown Law published a digestible and actionable tech brief: "Tech Brief: AI Sycophancy & OpenAI" (July 2025), making the academic research accessible to policy and practice audiences. If you're building impact organizations or policy, this is essential reading.
On the networking and warm introductions side, Sam.ai has been researching relationship intelligence with real depth. Vynta focuses specifically on AI-powered warm introduction strategies. Boardy.ai sits at the practical intersection of AI and authentic networking, though it's worth noting it's a tool company rather than a research organization.
For the broader picture on how AI is evolving, follow the research communities tracking AI safety and governance. Organizations like Center for AI Safety and the Partnership on AI are publishing important work on AI alignment, anthropomorphism, and responsible AI deployment—all relevant as you think about how to integrate these tools into impact work.
The Bottom Line
AI personality is a feature, not genuine connection. The new customization in GPT-5.1 is genuinely useful for tailoring how AI communicates with you. But customization doesn't change what AI fundamentally does. It's still prone to sycophancy, hallucination, and agreement bias. A warmer, more personable AI is not a more accurate or trustworthy AI.
Voice AI: Get the Proof. Avoid the Hype.
Deepgram interviewed 400 senior leaders on voice AI adoption: 97% already use it, 84% plan to increase budgets, yet only 21% are very satisfied with legacy agents. See where enterprises deploy human-like voice AI agents: customer service, task automation, and order capture. Benchmark your roadmap against $100M peers for 2026 priorities.
Gamma 3.0: Design the Way It Should Be
The PCDN team has been using Gamma for the past year, and we absolutely love it. Gamma 3.0 recently launched with even better features, and it's transforming how we create presentations, documents, and websites. This week Gamma hit a $2 billion valuation, but more important than the numbers is that the product is simply excellent.
Here's why we're obsessed with Gamma:
Saves us hours per week on content creation
Super easy to design - No steep learning curve or design expertise required
Constantly improving - Each update makes it better and better
Professional results - Every output looks polished and presentation-ready
Multi-format flexibility - Create presentations, documents, and websites seamlessly
AI-powered intelligence - Handles layouts, colors, and design decisions automatically
Whether you're building impact presentations, workshop materials, or campaign websites, Gamma eliminates the design bottleneck that holds back changemakers. After a full year of using it, we can confidently say this is design the way it should be: intuitive, fast, and consistently impressive.
The quality keeps improving with every release, making our content creation process smoother and our outputs more professional.
Ready to transform your design workflow?
News & Resources
Joke of the Day
An AI, a blockchain, and the metaverse walk into a pitch meeting.
The VC wakes up. 💤
📰 News
ProPublica Exposes Flawed AI Tool "Munching" Hundreds of VA Healthcare Contracts
ProPublica's investigation revealed that a DOGE software engineer with no government procurement experience built an AI tool that flagged more than 2,000 VA contracts for cancellation, including roughly 600 already cut—with errors grossly inflating contract values and misclassifying direct patient-care devices, such as lifts for immobile patients, as "multiple layers removed from veterans care," prompting Senate Democrats to demand a federal investigation.
Scientists Record 9,000 Hours of African Languages to Fix AI's Language Gap
Researchers from Kenya, Nigeria, and South Africa have created the largest AI-ready language dataset for 18 African languages including Hausa, Yoruba, Kikuyu, and isiZulu, addressing the fact that ChatGPT recognizes only 10-20% of Hausa sentences despite 94 million speakers, with $2.2 million Gates Foundation funding supporting open-access data for developers to incorporate into LLMs.
'Godfather of AI' Yoshua Bengio Becomes First Researcher to Hit One Million Citations
Computer scientist Yoshua Bengio has achieved the unprecedented milestone of one million citations on Google Scholar, cementing his status as the most-cited AI researcher in history and highlighting the explosive growth of artificial intelligence as a field over the past decade.
500 Global Opens Fast-Track Investment Call for Latin American Startups
Venture capital firm 500 Global launched a first-come, first-served application running November 10-30, 2025, to invest in five Latin American startups, with immediate team engagement once investments are confirmed and early closure if all slots fill before the deadline.
💼 Jobs, Jobs, Jobs
Work on Climate - Global community and job board connecting climate-focused professionals with opportunities at climate tech startups, clean energy companies, sustainability initiatives, and organizations working to address the climate crisis through technology and innovation.
👤 LinkedIn Profile to Follow
Vageesh Saxena - AI Researcher & Postdoc in AI for Social Good
Final-year PhD candidate in Natural Language Processing, Computer Vision, and Multimodal Machine Learning with 5+ years developing ethical and impactful AI solutions, focusing on innovative approaches to real-world challenges and societal advancements.
🎧 Today's Podcast Pick
AI for Good Podcast - AI for Climate Innovation Factory 2025 Live Pitching Session
ITU's AI for Climate Innovation Factory 2025 showcases cutting-edge AI-driven solutions addressing environmental degradation, featuring startups harnessing artificial intelligence to mitigate environmental impacts, support global adaptation efforts, and promote inclusive participation from underrepresented groups in sustainability initiatives.