Prototype Scope
Source
This is extracted from SOW 002 - AI Prototype Track, MIND 2 Platform Phase 0.
Overview
The AI Prototype workstream runs for 8 weeks as part of Phase 0, in parallel with the Shared Foundations workstream.
Key Evaluation Objectives
| Objective | Question |
| --- | --- |
| AI Technical Maturity | Can AI deliver the required accuracy and efficiency gains? |
| Business Process Change | Will the editorial team adopt human-in-the-loop verification? |
| Delivery Approach | Can AI-augmented development reduce time and cost? |
| Benefits Realization | Can we validate the targeted efficiency gains? |
Scope
In Scope
- Prototype Application - Working demonstration with price capture workflow, review UX, and assessment approval flows
- Voice Transcription - Technical domain optimization
- AI-Powered Price Assessment - Generate proposed prices with rationale (see the sketch after this list)
- Structured Data Capture - Workflow management
- Assessment AI Evaluation - Explore multiple AI approaches
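To make the AI-Powered Price Assessment item concrete, the sketch below shows one possible shape for a proposed price with rationale. The `LLMClient` interface, the prompt, and the JSON response format are illustrative assumptions, not the agreed prototype design.

```typescript
// Illustrative sketch only: LLMClient, its complete() signature, and the
// prompt/response format are assumptions, not the agreed prototype design.
interface LLMClient {
  complete(prompt: string): Promise<string>;
}

interface PriceAssessment {
  proposedPrice: number; // proposed price in the commodity's quoted unit
  currency: string;      // e.g. "USD"
  rationale: string;     // model-generated explanation for reviewer sign-off
}

async function assessPrice(
  llm: LLMClient,
  capturedQuotes: string[], // structured capture output (e.g. transcribed calls)
): Promise<PriceAssessment> {
  // Ask the model for a machine-readable answer so the review UX can render
  // the proposed price and rationale side by side for human verification.
  const prompt =
    `Given these market quotes:\n${capturedQuotes.join("\n")}\n` +
    `Respond with JSON: {"proposedPrice": number, "currency": string, "rationale": string}`;
  const raw = await llm.complete(prompt);
  return JSON.parse(raw) as PriceAssessment; // real code would validate the shape
}
```

Keeping the assessment output structured this way lets the review UX present price and rationale together for human-in-the-loop verification, and gives the evaluation framework a stable shape to score against.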
Out of Scope (Prototype)
- Production deployment
- Full user management
- Complete data migration
- Performance optimization at scale
Technology Choices
Application Stack
| Component | Technology |
| --- | --- |
| Framework | Next.js / Node.js |
| Database | SQL Server (for Fastmarkets portability; see the sketch below) |
| Authentication | Simple auth (for portability) |
| Deployment | AWS App Runner |
| CI/CD | GitHub Actions |
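The portability notes in the table above suggest keeping infrastructure behind thin seams. The sketch below illustrates the idea for data access; `PriceRepository` and its methods are hypothetical names for illustration, not part of the SOW.

```typescript
// Hypothetical repository seam: keeps the prototype portable by hiding the
// concrete SQL Server access behind one interface Fastmarkets can replace.
interface PriceRecord {
  id: string;
  commodity: string;
  price: number;
  capturedAt: Date;
}

interface PriceRepository {
  save(record: PriceRecord): Promise<void>;
  findByCommodity(commodity: string): Promise<PriceRecord[]>;
}

// In-memory stand-in usable until the SQL Server implementation lands; the
// application code depends only on PriceRepository, never on the driver.
class InMemoryPriceRepository implements PriceRepository {
  private records: PriceRecord[] = [];
  async save(record: PriceRecord): Promise<void> {
    this.records.push(record);
  }
  async findByCommodity(commodity: string): Promise<PriceRecord[]> {
    return this.records.filter((r) => r.commodity === commodity);
  }
}
```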
LLMs to Evaluate
| Provider | Models |
| --- | --- |
| OpenAI | GPT-5.x (preferred if viable) |
| Anthropic | Opus 4.5 / Sonnet 4.5 / Haiku 4.5 |
| Google | Gemini 3 Pro / 3 Flash |
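A minimal sketch of how the candidates above could be compared under the evaluation framework, assuming each provider SDK is wrapped in the same hypothetical `LLMClient` interface; the test-case shape and exact-match scoring are placeholder assumptions, not the agreed methodology.

```typescript
// Illustrative evaluation harness: the candidate models come from the table
// above, but the LLMClient wrapper and exact-match scoring are assumptions.
interface LLMClient {
  complete(prompt: string): Promise<string>;
}

interface TestCase {
  prompt: string;   // e.g. a price-capture scenario from the test data suite
  expected: string; // reference answer agreed with Fastmarkets SMEs
}

async function evaluateModel(
  name: string,
  llm: LLMClient,
  cases: TestCase[],
): Promise<{ model: string; accuracy: number }> {
  let correct = 0;
  for (const c of cases) {
    const answer = await llm.complete(c.prompt);
    // Exact match is a placeholder; the real framework would use
    // domain-specific scoring (tolerance bands, rationale quality, etc.).
    if (answer.trim() === c.expected.trim()) correct++;
  }
  return { model: name, accuracy: correct / cases.length };
}
```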
Voice / Transcription
- AssemblyAI
- Whisper (OpenAI)
- Azure Cognitive Services Speech
- VAPI (STS use cases if required)
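These services expose different APIs, so a thin adapter would keep them interchangeable during evaluation. The sketch below is illustrative: the `Transcriber` interface is a hypothetical seam, and the mismatch metric is a crude stand-in for a proper word-error-rate calculation.

```typescript
// Hypothetical adapter: each shortlisted service (AssemblyAI, Whisper, Azure)
// would get one implementation of this interface for side-by-side evaluation.
interface Transcriber {
  transcribe(audio: Buffer): Promise<string>;
}

// Crude word-mismatch proxy for comparing services against a reference
// transcript; the real evaluation would use a proper WER implementation.
function wordMismatchRate(hypothesis: string, reference: string): number {
  const hyp = hypothesis.toLowerCase().split(/\s+/);
  const ref = reference.toLowerCase().split(/\s+/);
  const length = Math.max(hyp.length, ref.length);
  let mismatches = 0;
  for (let i = 0; i < length; i++) {
    if (hyp[i] !== ref[i]) mismatches++;
  }
  return length === 0 ? 0 : mismatches / length;
}
```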
AI Frameworks
- LangChain
- LangGraph
- LangSmith / Logfire
Deliverables
Week 2 - Design
| Deliverable | Description |
| --- | --- |
| AI Prototype Overall Design | Architecture, AI evaluation framework, data models |
| Test Data Approach | How test data will be sourced and handled (see Data Handling below) |
Week 4 - Test Data
| Deliverable | Description |
| --- | --- |
| Test Data Suite | Test data + test scripts |
| Data Generation Tools | Tools to generate/augment test data |
Week 6 - Interim Prototype
| Deliverable | Description |
| --- | --- |
| Functional Prototype | End-to-end workflow from capture to output |
| CI/CD Pipeline | Deployable with configuration management |
Week 7 - Deployment Package
| Deliverable | Description |
| --- | --- |
| Deployment Package | Repository + operational instructions |
| Operations Materials | Deployment guide, support materials |
Week 8 - Final Delivery
| Deliverable | Description |
| --- | --- |
| Final Prototype | Incorporating Week 7-8 refinements |
| Documentation | Runbooks, technical docs, deployment guides |
Timeline
| Week | Date (w/c) | Activity |
| --- | --- | --- |
| 0 | 5th January | Onboarding, planning |
| 1 | 12th January | Project initiation |
| 2 | 19th January | Design + Test Data Approach |
| 4 | 2nd February | Test Data Suite |
| 6 | 16th February | Interim Prototype |
| 7 | 23rd February | Deployment Package |
| 8 | 2nd March | Final Prototype |
Team
Leadership
- Paul Scott - Overall project leadership, AI strategy, technology lead (Weeks 0-12)
Technical Team
- Senior Engineer - Core development, technical implementation (Weeks 1-8)
- Senior QA/Engineer - Quality assurance, testing, deployment support (Weeks 1-8)
Success Criteria
The prototype is successful when:
- [ ] All deliverables accepted by client
- [ ] Complete end-to-end workflow demonstrated
- [ ] Evaluation framework enables AI performance assessment
- [ ] Technical approach proven suitable for production
- [ ] Deployment materials provided and proven functional
Data Handling
All data and code are treated as Fastmarkets confidential.
Test data sources:
- Synthetic representative data (created by Luminarium)
- Fastmarkets-provided anonymized data
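For the synthetic route, a hedged sketch of a generator for representative price-capture records follows. The commodity names, price band, and field names are invented placeholders; real parameters would be agreed with Fastmarkets SMEs, and no Fastmarkets data is embedded.

```typescript
// Illustrative synthetic data generator: commodity names and price ranges are
// placeholders, to be replaced with SME-agreed parameters. No real Fastmarkets
// data is embedded here.
interface SyntheticQuote {
  commodity: string;
  price: number;
  currency: string;
  source: string;
  capturedAt: string; // ISO 8601 timestamp
}

function generateQuotes(count: number): SyntheticQuote[] {
  const commodities = ["commodity-a", "commodity-b", "commodity-c"]; // placeholders
  return Array.from({ length: count }, (_, i) => ({
    commodity: commodities[i % commodities.length],
    price: Math.round((50 + Math.random() * 950) * 100) / 100, // arbitrary band
    currency: "USD",
    source: `synthetic-source-${i % 5}`,
    capturedAt: new Date(Date.now() - i * 86_400_000).toISOString(), // one day apart
  }));
}
```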
Data Classification
Consult Fastmarkets SMEs on test data selection to ensure the suite covers suitable test cases.