# Feedback & Satisfaction Survey System
An AI-powered feedback collection widget and a periodic satisfaction survey with a reporting API, designed to capture structured user feedback and quantify productivity savings.
## Stack / Entry Points

- Backend: FastAPI router (`src/api/routes/feedback.py`), feedback service (`src/api/services/feedback_service.py`), SQLAlchemy models (`src/database/models/feedback.py`)
- Frontend: React components in `frontend/src/components/Feedback/`, survey trigger hook (`frontend/src/hooks/useSurveyTrigger.ts`)
- LLM: `ChatDatabricks` via `databricks_langchain` — defaults to `databricks-gemma-3-12b`, overridable with the `FEEDBACK_LLM_ENDPOINT` env var
- Storage: PostgreSQL tables `feedback_conversations` and `survey_responses`
- Schemas: `src/api/schemas/feedback.py` (Pydantic request/response validation)
## Architecture Snapshot

```
┌──────────────────────────────────────────────────────────┐
│ Frontend (AppLayout.tsx)                                 │
│                                                          │
│ FeedbackButton ──► FeedbackPopover (chat with AI)        │
│                        │                                 │
│                        │ POST /api/feedback/chat         │
│                        │ (stateless, full history        │
│                        │  sent each call)                │
│                        │                                 │
│                        │ POST /api/feedback/submit       │
│                        │ (confirmed summary)             │
│                                                          │
│ useSurveyTrigger ──► SurveyModal (stars + time + NPS)    │
│                        │                                 │
│                        │ POST /api/feedback/survey       │
│                                                          │
│ Admin / reporting:                                       │
│   GET /api/feedback/report/stats   (SQL aggregation)     │
│   GET /api/feedback/report/summary (LLM-generated)       │
└──────────────────────────────────────────────────────────┘
              │                              │
              ▼                              ▼
       FeedbackService                  PostgreSQL
       (LLM chat + DB ops)              feedback_conversations
                                        survey_responses
```
## Key Concepts / Data Contracts

### Feedback Chat (Request / Response)

```json
// POST /api/feedback/chat
// Request
{ "messages": [{ "role": "user", "content": "The text is hard to read" }] }

// Response
{ "content": "Can you tell me which slide styles are affected?", "summary_ready": false }
```

The frontend sends the full conversation history on each call (the server is stateless). When the AI produces a structured **Summary** block, `summary_ready` becomes `true`.
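The chat contract above maps naturally onto Pydantic models. A minimal sketch — the real classes live in `src/api/schemas/feedback.py`, and the class names here are assumptions:

```python
from typing import Literal
from pydantic import BaseModel

class ChatMessage(BaseModel):
    role: Literal["user", "assistant"]
    content: str

class FeedbackChatRequest(BaseModel):
    # The full history travels with every request; the server holds no session.
    messages: list[ChatMessage]

class FeedbackChatResponse(BaseModel):
    content: str
    summary_ready: bool = False

# Parse the example payloads from the contract above
req = FeedbackChatRequest(
    messages=[{"role": "user", "content": "The text is hard to read"}]
)
resp = FeedbackChatResponse(content="Can you tell me which slide styles are affected?")
```

Because the request carries the whole history, the backend never needs a session store — a deliberate trade of bandwidth for simplicity.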
### Feedback Submit

```json
// POST /api/feedback/submit
{
  "category": "Bug Report",
  "summary": "Text unreadable on dark backgrounds",
  "severity": "High",
  "raw_conversation": [
    { "role": "user", "content": "..." },
    { "role": "assistant", "content": "..." }
  ]
}
```
### Survey Submit

```json
// POST /api/feedback/survey
{
  "star_rating": 4,
  "time_saved_minutes": 120,
  "nps_score": 8
}
```

`star_rating` is required (1-5). `time_saved_minutes` (15/30/60/120/240/480) and `nps_score` (0-10) are optional.
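The survey constraints can be enforced entirely declaratively in Pydantic. A sketch, assuming the schema name and using a `Literal` for the preset time values (the real schema in `src/api/schemas/feedback.py` may differ):

```python
from typing import Literal, Optional
from pydantic import BaseModel, Field

class SurveySubmit(BaseModel):
    # Required, clamped to the 5-star scale
    star_rating: int = Field(ge=1, le=5)
    # Optional; only the preset pill values are accepted
    time_saved_minutes: Optional[Literal[15, 30, 60, 120, 240, 480]] = None
    # Optional standard NPS range
    nps_score: Optional[int] = Field(default=None, ge=0, le=10)

ok = SurveySubmit(star_rating=4, time_saved_minutes=120, nps_score=8)
```

A payload with `star_rating=6` or `time_saved_minutes=45` would raise a validation error before it ever reaches the service layer.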
### Feedback Categories & Severities
| Categories | Severities |
|---|---|
| Bug Report, Feature Request, UX Issue, Performance, Content Quality, Other | Low, Medium, High |
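The submit payload plus the preset values above can be pinned down with `Literal` types — a sketch under assumed names, not the exact classes in `src/api/schemas/feedback.py`:

```python
from typing import Literal
from pydantic import BaseModel

# The six preset categories and three severities from the table above
Category = Literal["Bug Report", "Feature Request", "UX Issue",
                   "Performance", "Content Quality", "Other"]
Severity = Literal["Low", "Medium", "High"]

class FeedbackSubmit(BaseModel):
    category: Category
    summary: str
    severity: Severity
    raw_conversation: list[dict]   # full chat transcript, stored verbatim

fb = FeedbackSubmit(
    category="Bug Report",
    summary="Text unreadable on dark backgrounds",
    severity="High",
    raw_conversation=[{"role": "user", "content": "..."}],
)
```

Keeping the category list in one schema constant (the doc's Extension Guidance mentions `FEEDBACK_CATEGORIES`) means the API, the DB `CheckConstraint`, and the system prompt must be updated together when it changes.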
## Component Responsibilities

### Backend
| File | Responsibility |
|---|---|
| `src/api/routes/feedback.py` | 5 endpoints: chat, submit, survey, report/stats, report/summary |
| `src/api/services/feedback_service.py` | LLM chat, DB persistence, stats aggregation, AI summary generation |
| `src/api/schemas/feedback.py` | Pydantic validation for all request/response types |
| `src/database/models/feedback.py` | `FeedbackConversation` and `SurveyResponse` SQLAlchemy models |
### Frontend
| File | Responsibility |
|---|---|
| `frontend/src/components/Feedback/FeedbackButton.tsx` | Floating icon button (fixed bottom-right, z-60) |
| `frontend/src/components/Feedback/FeedbackPopover.tsx` | Chat UI: messages, input, summary detection, submit/correction flow |
| `frontend/src/components/Feedback/SurveyModal.tsx` | Star rating + time saved + NPS modal |
| `frontend/src/components/Feedback/StarRating.tsx` | 5-star interactive rating |
| `frontend/src/components/Feedback/NPSScale.tsx` | 0-10 numbered button row |
| `frontend/src/components/Feedback/TimeSavedPills.tsx` | Pill buttons: 15min, 30min, 1hr, 2hrs, 4hrs, 8hrs |
| `frontend/src/components/Feedback/FeedbackDashboard.tsx` | Hidden /admin page (Feedback tab) — stats table, totals, AI summary (no nav link) |
| `frontend/src/hooks/useSurveyTrigger.ts` | 30s post-generation timer with 7-day localStorage cooldown |
| `frontend/src/services/api.ts` | `feedbackChat()`, `submitFeedback()`, `submitSurvey()`, `getReportStats()`, `getReportSummary()` |
## Data Flow

### Feedback Widget Flow

- User clicks the floating feedback button (bottom-right corner).
- `FeedbackPopover` opens with a greeting message.
- User types feedback. The frontend sends the full conversation to `POST /api/feedback/chat`.
- Backend prepends the system prompt and calls `ChatDatabricks` (default: `databricks-gemma-3-12b`).
- The AI asks up to 2 clarifying questions, then produces a structured **Summary** block.
- The frontend detects `summary_ready: true` and displays a "Submit Feedback" button with an optional correction text box.
- User clicks Submit → `POST /api/feedback/submit` stores the summary plus raw conversation in `feedback_conversations`.
- A "Thank you" message appears and the popover closes after 2 seconds.
### Survey Flow

- User generates a presentation successfully.
- `useSurveyTrigger` checks the `localStorage` key `tellr_survey_last_shown`.
- If eligible (no survey shown in the last 7 days), it starts a 30-second timer.
- If the user starts another generation during the 30 s, the timer resets.
- After 30 s idle, the survey modal appears; the timestamp is written to `localStorage` immediately.
- User fills in star rating (required), time saved, and NPS (optional), then clicks Submit.
- `POST /api/feedback/survey` stores the response in `survey_responses`.
- Dismissing (X button) without submitting still counts as "shown" for the 7-day cooldown.
### Feedback Dashboard (Hidden Page)

The feedback dashboard is rendered on the `/admin` page as a tab, sharing the layout with the Google Slides configuration. The `/feedback` route redirects to `/admin`. The admin page is not linked from the navigation bar — access it by typing the URL directly (e.g. `https://<host>/admin`).
The Feedback tab shows:
- Summary cards — overall average star rating, NPS, total time saved, total responses.
- Weekly stats table — one row per week with response count, avg stars, avg NPS, and time saved.
- AI-generated summary — narrative analysis of feedback themes, category breakdown, and top themes.
Both the stats and summary sections have a configurable week-range selector.
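The weekly aggregation behind the stats table is SQL in the real service; the same grouping can be sketched in pure Python, with the row shape assumed from the `survey_responses` columns:

```python
from collections import defaultdict
from datetime import date
from statistics import mean

def weekly_stats(rows):
    """Group survey rows by ISO week; average stars/NPS, sum time saved."""
    by_week = defaultdict(list)
    for r in rows:
        iso = r["created_at"].isocalendar()
        by_week[(iso[0], iso[1])].append(r)   # (ISO year, ISO week)
    out = []
    for (year, week), group in sorted(by_week.items()):
        nps_values = [r["nps_score"] for r in group if r["nps_score"] is not None]
        out.append({
            "week": f"{year}-W{week:02d}",
            "responses": len(group),
            "avg_stars": round(mean(r["star_rating"] for r in group), 2),
            "avg_nps": round(mean(nps_values), 2) if nps_values else None,
            "time_saved_minutes": sum(r["time_saved_minutes"] or 0 for r in group),
        })
    return out

rows = [
    {"created_at": date(2024, 6, 3), "star_rating": 4, "nps_score": 8, "time_saved_minutes": 120},
    {"created_at": date(2024, 6, 5), "star_rating": 5, "nps_score": None, "time_saved_minutes": 60},
]
stats = weekly_stats(rows)
```

Note the asymmetry the optional columns force: NULL NPS rows are excluded from the average, while NULL time-saved rows simply contribute zero to the sum.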
## API Table

| Method | Path | Purpose |
|---|---|---|
| POST | `/api/feedback/chat` | Send feedback conversation message, get AI response |
| POST | `/api/feedback/submit` | Submit confirmed feedback (summary + raw conversation) |
| POST | `/api/feedback/survey` | Submit satisfaction survey response |
| GET | `/api/feedback/report/stats?weeks=12` | Weekly aggregated stats (star avg, NPS avg, time saved sum) |
| GET | `/api/feedback/report/summary?weeks=4` | AI-generated narrative summary of feedback themes |
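A survey submission against this API can be built with nothing but the standard library — a sketch with a placeholder host (the request is constructed but not sent):

```python
import json
import urllib.request

payload = {"star_rating": 5, "time_saved_minutes": 60, "nps_score": 9}

req = urllib.request.Request(
    "http://localhost:8000/api/feedback/survey",   # placeholder host/port
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; omitted here.
```

In the app itself, the equivalent call goes through `submitSurvey()` in `frontend/src/services/api.ts`.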
## Database Tables

### `feedback_conversations`
| Column | Type | Constraints |
|---|---|---|
| `id` | INTEGER (PK) | Auto-increment |
| `category` | VARCHAR(50) | One of 6 preset categories |
| `summary` | TEXT | AI-generated summary |
| `severity` | VARCHAR(10) | Low / Medium / High |
| `raw_conversation` | JSON | Full message array |
| `created_at` | TIMESTAMP | Default: `utcnow()` |
### `survey_responses`
| Column | Type | Constraints |
|---|---|---|
| `id` | INTEGER (PK) | Auto-increment |
| `star_rating` | INTEGER | 1-5, required |
| `time_saved_minutes` | INTEGER | 15/30/60/120/240/480 or NULL |
| `nps_score` | INTEGER | 0-10 or NULL |
| `created_at` | TIMESTAMP | Default: `utcnow()` |
Both tables are anonymous — no user identity columns.
## Operational Notes

### LLM Configuration
The feedback chat uses a separate LLM endpoint from slide generation to keep responses fast:
| Setting | Value |
|---|---|
| Default endpoint | databricks-gemma-3-12b |
| Override | FEEDBACK_LLM_ENDPOINT env var |
| Temperature | 0.3 (chat), 0.2 (report summary) |
| Max tokens | 500 (chat), 800 (report summary) |
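The override resolution can be sketched in a few lines — the helper name is hypothetical, and the commented-out client construction assumes the `ChatDatabricks` constructor from `databricks_langchain`:

```python
import os

DEFAULT_FEEDBACK_ENDPOINT = "databricks-gemma-3-12b"

def resolve_feedback_endpoint() -> str:
    """FEEDBACK_LLM_ENDPOINT wins when set; otherwise fall back to the default."""
    return os.environ.get("FEEDBACK_LLM_ENDPOINT") or DEFAULT_FEEDBACK_ENDPOINT

# The resolved name would then feed the client, roughly:
#   llm = ChatDatabricks(endpoint=resolve_feedback_endpoint(),
#                        temperature=0.3, max_tokens=500)
endpoint = resolve_feedback_endpoint()
```

Keeping this endpoint separate from slide generation lets a smaller, faster model serve the chat widget without touching the main pipeline's configuration.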
### Summary Detection
The backend detects when the AI has produced a summary by checking for the **Summary** marker in the response content. This avoids special sentinel tokens — the AI's structured output is both the detection mechanism and the displayed content.
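Marker detection is a substring check; pulling structured fields out of the block takes one regex. A sketch — the exact field layout of the **Summary** block (`Category:` / `Severity:` / `Summary:` lines) is an assumption here:

```python
import re

def parse_summary(content: str):
    """Return (summary_ready, fields) for an AI reply.

    Assumes the block carries 'Category:', 'Severity:', 'Summary:' lines.
    """
    if "**Summary**" not in content:
        return False, {}
    fields = dict(re.findall(r"(?m)^(Category|Severity|Summary):\s*(.+)$", content))
    return True, fields

ready, fields = parse_summary(
    "**Summary**\nCategory: Bug Report\nSeverity: High\nSummary: Text unreadable"
)
```

The same markdown that drives detection is rendered to the user, so there is no hidden protocol between the model and the frontend to keep in sync.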
### Error Handling
- LLM endpoint not configured → 503 Service Unavailable
- LLM call fails → 500 with logged error
- Invalid request payload → 422 Validation Error (Pydantic)
- Survey/feedback DB write fails → 500 with logged error
## Testing
| Test file | Coverage |
|---|---|
| `tests/unit/test_feedback_models.py` | Model creation, constraints (10 tests) |
| `tests/unit/test_feedback_schemas.py` | Pydantic validation (14 tests) |
| `tests/unit/test_feedback_service.py` | LLM mocking, DB ops, stats, summaries (10 tests) |
| `tests/unit/test_feedback_routes.py` | Endpoint request/response (6 tests) |
## Extension Guidance

- Add a feedback category: update `FEEDBACK_CATEGORIES` in `src/api/schemas/feedback.py` and the `CheckConstraint` in `src/database/models/feedback.py`. The system prompt in `feedback_service.py` also lists the categories.
- Change time-saved options: update `TIME_OPTIONS` in `TimeSavedPills.tsx`, `ALLOWED_TIME_SAVED` in `src/api/schemas/feedback.py`, and the `CheckConstraint` in the model.
- Add user identity: add a `user_id` or `email` column to both tables and pass it from the frontend (this would need an auth system first).
- Export feedback data: the `/api/feedback/report/stats` endpoint returns JSON suitable for dashboards. For raw export, query the tables directly or add a CSV export endpoint.
- Adjust survey timing: change `DELAY_MS` (post-generation wait) and `COOLDOWN_MS` (minimum time between surveys) in `frontend/src/hooks/useSurveyTrigger.ts`.
## Cross-References

- Backend Overview — FastAPI router registration, service patterns
- Frontend Overview — React context providers, component layout
- Real-Time Streaming — how the `onSlidesGenerated` callback works (survey trigger hook)
- Database Configuration — PostgreSQL setup, `Base.metadata.create_all()`