Helpful AI, not surveillance AI.
Pulse is the pulse of your company. It’s designed to make every employee’s job easier and more contextual. There is a class of features that would technically work but is ethically out of bounds for us. These features corrupt the data, corrode trust, and break the product. The list below is non-negotiable.
We will not build
Individual performance scoring
Triggers labor-law obligations in most jurisdictions; makes employees fear the platform; sabotages the data quality.
Productivity rankings + speed metrics
Surveillance posing as analytics. The track record shows they correlate with discrimination, not improvement.
'Underperformer' detection
AI-driven employment decisions are an ethical and legal minefield. We refuse.
Speaking-time leaderboards
Same family. Conversational dynamics aren't a fitness metric.
Auto-replace-the-employee suggestions
Whatever it sounds like, it's worse.
Auto-scheduled meetings
Suggesting is fine; auto-scheduling makes humans hostile to the tool. We learned this.
Instead, we build
Manager second brain
Helps managers track commitments, surface 1:1 questions, draft positive feedback. The manager is the agent. The AI assists.
Quiet contributor detection
Surfaces high-impact invisible work for recognition, never for management discipline. Roll-up only, no rankings.
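A minimal sketch of what “roll-up only, no rankings” can mean in practice. The event shape, function names, and the set of “invisible” work kinds are all hypothetical; the point is that the aggregation emits team-level counts and an unordered recognition set, never a per-person score or ordering:

```python
from collections import defaultdict

def roll_up_contributions(events):
    """Aggregate contribution events to the team level.

    `events` is a list of dicts like {"person": ..., "team": ..., "kind": ...}.
    Returns team-level counts only: no per-person scores, no ordering.
    """
    by_team = defaultdict(lambda: defaultdict(int))
    for e in events:
        by_team[e["team"]][e["kind"]] += 1
    return {team: dict(kinds) for team, kinds in by_team.items()}

def recognition_candidates(events, invisible_kinds=("code_review", "mentoring", "incident_support")):
    """Return an unordered set of people doing high-impact invisible work.

    A set, deliberately: for recognition only, never a ranked list.
    """
    return {e["person"] for e in events if e["kind"] in invisible_kinds}
```

Returning a `set` rather than a sorted list is the design choice doing the ethical work: there is nothing for a manager to read as a leaderboard.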
Skill graph for staffing & mentoring
Used for internal-mobility matching, finding mentors, and planning teams. Never for ranking.
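A sketch of “matching, not ranking” over a skill graph, under the assumption that the graph is a simple adjacency map from person to skills (the function name and data shape are illustrative, not a real Pulse API):

```python
def find_mentors(skills, mentee, desired_skill):
    """Match mentors in a skill graph.

    `skills` maps person -> set of skills. Returns the unordered set of
    people who have the desired skill (excluding the mentee) -- a match
    set, never a ranked or scored list.
    """
    return {p for p, s in skills.items() if desired_skill in s and p != mentee}
```

The same shape works for staffing: intersect the match sets for each skill a team needs.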
Living Context Cards
Auto-updating context for each person, without scoring anyone. The card writes itself from data; humans don't have to update it.
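One way a self-writing card can look, as a hedged sketch: a structure that absorbs activity events and deliberately has no score, rank, or rating field. The event types and field names here are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ContextCard:
    """Auto-updating context for one person. Describes, never evaluates:
    there is intentionally no score, rank, or rating field."""
    person: str
    current_projects: set = field(default_factory=set)
    recent_topics: list = field(default_factory=list)

    def apply_event(self, event):
        """Fold a hypothetical activity event into the card."""
        if event["type"] == "project_update":
            self.current_projects.add(event["project"])
        elif event["type"] == "discussion":
            self.recent_topics.append(event["topic"])
            self.recent_topics = self.recent_topics[-5:]  # keep only fresh context
```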
Our wedge is being the helpful AI, not the surveillance AI.
The audience we want loves Linear, hates Workday. If a future investor or customer pushes us toward performance scoring, the answer is “no, here’s why, and here are the alternative features that achieve the underlying goal.”