Work Breakdown Structure
The project hierarchy flows from top to bottom: Project → Epics → Stories → Tasks. Each node references its parent via parent_id, forming a tree:
work_items {
  item_id        UUID PK
  project_id     UUID FK
  parent_id      UUID FK nullable
  type           ENUM(EPIC, STORY, TASK, MILESTONE)
  title          VARCHAR(255)
  story_points   INT nullable
  pct_complete   FLOAT DEFAULT 0
  start_date     DATE nullable
  end_date       DATE nullable
  duration_days  INT nullable
}
Story points and completion percentage roll up from children to parents. When a child task is updated, the parent's pct_complete is recalculated as the weighted average of its children's completion values.
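The roll-up described above can be sketched as follows; the function name and dict shape are illustrative assumptions, and a child without story points is treated as carrying a weight of 1:

```python
def rollup_pct_complete(children):
    """Weighted average of children's pct_complete, weighted by story points.

    children: list of dicts with "story_points" (may be None) and "pct_complete".
    Returns 0.0 for a node with no children.
    """
    total_weight = 0.0
    weighted_done = 0.0
    for child in children:
        weight = child.get("story_points") or 1  # unpointed tasks count as weight 1
        total_weight += weight
        weighted_done += weight * child["pct_complete"]
    return weighted_done / total_weight if total_weight else 0.0
```

For example, a parent with a 3-point task at 100% and a 1-point task at 0% rolls up to 75%, not 50%, because the larger task dominates the weighted average.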
Task Scheduling Model
Each work item carries scheduling attributes used to place it on the Gantt chart:
- start_date / end_date: planned dates, either manually set or computed from dependencies
- duration_days: working days, used when only start_date is known
- dependencies[]: list of predecessor item_ids with dependency type
- resource_assignments[]: list of {person_id, hours_per_day}
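When only start_date and duration_days are known, the end date can be derived by walking forward over working days. A minimal sketch, assuming a Monday-to-Friday working week with no holiday calendar (the function name is illustrative):

```python
from datetime import date, timedelta

def add_working_days(start: date, duration_days: int) -> date:
    """Return the end date after duration_days working days (Mon-Fri),
    counting the start date itself as the first working day."""
    current = start
    # If the start falls on a weekend, move to the next weekday first.
    while current.weekday() >= 5:  # Mon=0 ... Fri=4, Sat=5, Sun=6
        current += timedelta(days=1)
    remaining = duration_days
    while remaining > 1:
        current += timedelta(days=1)
        if current.weekday() < 5:
            remaining -= 1
    return current
```

A 2-day task starting Friday 2024-03-01 ends Monday 2024-03-04, since the weekend is skipped.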
Gantt Chart Data Model
The Gantt API returns tasks as time-positioned bars with dependency arrows. The response shape:
{
  "item_id": "...",
  "title": "...",
  "start_date": "2024-03-01",
  "end_date": "2024-03-10",
  "pct_complete": 40,
  "is_critical": true,
  "dependencies": [{"predecessor_id": "...", "type": "FS"}]
}
The is_critical flag is set by the critical path algorithm and drives the red highlighting in the UI.
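Rendering the bars is then a client-side mapping of dates onto a linear pixel scale. A minimal sketch (the function name and the px_per_day default are illustrative):

```python
from datetime import date

def bar_geometry(task_start: date, task_end: date,
                 chart_start: date, px_per_day: float = 20.0):
    """Map a task's date range to (x_offset, width) in pixels on a
    linear time scale anchored at chart_start."""
    x = (task_start - chart_start).days * px_per_day
    # +1 so a task starting and ending the same day still renders as a visible bar.
    width = ((task_end - task_start).days + 1) * px_per_day
    return x, width
```

Dependency arrows are then drawn between the computed bar edges, so the server only needs to ship dates, not pixel coordinates.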
Critical Path Method (CPM)
CPM runs in two passes over the dependency-ordered task graph:
- Forward pass: for each task in topological order, compute Earliest Start (ES) = max(predecessor Earliest Finish); Earliest Finish (EF) = ES + duration
- Backward pass: starting from the project end date, compute Latest Finish (LF) = min(successor Latest Start); Latest Start (LS) = LF – duration
- Float (slack): LS – ES. Tasks with float = 0 are on the critical path — any delay to them delays the project end date
CPM is recalculated whenever tasks, durations, or dependencies change. For large projects, recalculation runs as a background job triggered by a change event.
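The two passes above can be sketched as follows for plain finish-to-start dependencies; the function signature is an illustrative assumption, and durations are in working days:

```python
from collections import defaultdict, deque

def critical_path(durations, deps):
    """CPM over a task DAG.

    durations: {task_id: duration}
    deps: list of (predecessor, successor) finish-to-start edges
    Returns {task_id: float}; tasks with float 0 are on the critical path.
    """
    succs, preds = defaultdict(list), defaultdict(list)
    indegree = {t: 0 for t in durations}
    for a, b in deps:
        succs[a].append(b)
        preds[b].append(a)
        indegree[b] += 1

    # Forward pass in topological order: ES = max(pred EF), EF = ES + duration.
    es, ef, order = {}, {}, []
    queue = deque(t for t in durations if indegree[t] == 0)
    while queue:
        t = queue.popleft()
        order.append(t)
        es[t] = max((ef[p] for p in preds[t]), default=0)
        ef[t] = es[t] + durations[t]
        for s in succs[t]:
            indegree[s] -= 1
            if indegree[s] == 0:
                queue.append(s)

    # Backward pass from the project end: LF = min(succ LS), LS = LF - duration.
    project_end = max(ef.values())
    lf, ls = {}, {}
    for t in reversed(order):
        lf[t] = min((ls[s] for s in succs[t]), default=project_end)
        ls[t] = lf[t] - durations[t]

    return {t: ls[t] - es[t] for t in durations}
```

Both passes visit each task and edge once, so the whole computation is O(V+E).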
Dependency Types
Four standard dependency types are supported, matching MS Project conventions:
- Finish-to-Start (FS): B cannot start until A finishes (most common)
- Start-to-Start (SS): B cannot start until A starts
- Finish-to-Finish (FF): B cannot finish until A finishes
- Start-to-Finish (SF): B cannot finish until A starts
Each dependency edge also supports a lag (positive delay) or lead (negative, overlap) value in days, stored as an integer offset applied during CPM calculations.
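During the forward pass, each dependency type constrains the successor's earliest start differently. A sketch of the per-type bound, with the lag/lead offset folded in (the function name is illustrative):

```python
def earliest_start_bound(dep_type, pred_es, pred_ef, succ_duration, lag=0):
    """Lower bound a single dependency edge places on the successor's
    earliest start. A negative lag value is a lead (overlap)."""
    if dep_type == "FS":   # successor cannot start until predecessor finishes
        return pred_ef + lag
    if dep_type == "SS":   # successor cannot start until predecessor starts
        return pred_es + lag
    if dep_type == "FF":   # successor cannot finish until predecessor finishes
        return pred_ef + lag - succ_duration
    if dep_type == "SF":   # successor cannot finish until predecessor starts
        return pred_es + lag - succ_duration
    raise ValueError(f"unknown dependency type: {dep_type}")
```

The successor's ES is then the maximum of these bounds across all of its incoming edges, which is exactly the max() step of the forward pass.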
Resource Allocation and Overload Detection
Resource assignments link people to tasks with a daily hour commitment:
resource_assignments {
  item_id        UUID FK
  person_id      UUID FK
  hours_per_day  FLOAT
}
Overload detection aggregates daily allocated hours per person across all their active tasks:
SELECT ra.person_id, tc.date, SUM(ra.hours_per_day) AS total_hours
FROM resource_assignments ra
JOIN task_calendar tc ON tc.task_id = ra.item_id
  -- task_calendar expands each task into one row per working day
  -- between its start_date and end_date
GROUP BY ra.person_id, tc.date
HAVING SUM(ra.hours_per_day) > 8
Overallocated days are surfaced in the resource view as red bars. Project managers can drill into which tasks are competing for the same person on a given day.
Resource Leveling
When overallocation is detected, the system can suggest or automatically apply resource leveling: tasks with positive float (non-critical) are shifted later within their float window to reduce peak resource usage. The algorithm prioritizes tasks by priority and float, delaying the least-critical tasks first.
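A greedy version of this strategy might look like the following sketch. The task dict fields, the priority convention (lower value = less important), and the fixed 8-hour daily capacity are all illustrative assumptions:

```python
def daily_load(tasks):
    """Total allocated hours per day index across all tasks."""
    load = {}
    for t in tasks:
        for day in range(t["start"], t["start"] + t["duration"]):
            load[day] = load.get(day, 0.0) + t["hours_per_day"]
    return load

def level(tasks, capacity=8.0):
    """Greedy leveling: delay the least-critical movable task one day at a
    time, within its float window, until no day exceeds capacity or no
    task can move. Mutates and returns the task list."""
    # Least critical first: lowest priority value, then largest float.
    candidates = sorted(tasks, key=lambda t: (t["priority"], -t["float"]))
    for t in candidates:
        while t["float"] > 0 and max(daily_load(tasks).values()) > capacity:
            t["start"] += 1   # shift one day later
            t["float"] -= 1   # one day of slack consumed
    return tasks
```

Critical tasks (float 0) never move, so leveling can reduce peak load without pushing out the project end date; if every movable task exhausts its float and a day is still overloaded, the conflict has to be escalated to the PM.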
Milestone Tracking
Milestones are zero-duration tasks (MILESTONE type) that mark significant project checkpoints. They appear as diamond shapes on the Gantt chart. A burn-up chart tracks milestone completion over time, plotting planned vs. actual milestone delivery dates.
Baseline and Variance Tracking
A baseline is a snapshot of the plan taken at project kickoff or at any other anchor point, stored as immutable copies of task dates and estimates:
baselines { baseline_id, project_id, captured_at, captured_by }
baseline_items { baseline_id, item_id, planned_start, planned_end, planned_hours }
Variance reports compare current values to baseline: schedule variance (days late/early), cost variance (actual vs. planned hours). This surfaces scope creep and schedule drift early.
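Both variance measures reduce to simple deltas between current and baseline values; the function names below are illustrative:

```python
from datetime import date

def schedule_variance_days(baseline_end: date, current_end: date) -> int:
    """Positive = days late versus baseline, negative = ahead of plan."""
    return (current_end - baseline_end).days

def cost_variance_hours(planned_hours: float, actual_hours: float) -> float:
    """Positive = over budget in hours, negative = under budget."""
    return actual_hours - planned_hours
```

A task baselined to finish 2024-03-10 but now forecast for 2024-03-14 shows a schedule variance of +4 days, which the report rolls up per epic and per project.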
Progress Tracking and Change Management
Team members enter percentage complete on their tasks. This rolls up through the WBS to epics and the project summary bar. When a scope change is proposed — adding tasks, changing durations — the system previews the impact on the critical path and project end date before the change is approved, giving the PM a before/after comparison to present to stakeholders.