This video walks through the core concepts for this module. Watch it first, then use the slides below to go deeper.
A Fort Worth contractor spent three weeks writing: tight prose, strong past performance, competitive price. She lost. The winning team built a compliance matrix first, mapped every requirement to an evaluation criterion, and locked their win themes before anyone touched a keyboard. Same information. Different result. Proposals are won in planning, not in writing.
No contractor pours concrete without approved architectural drawings. The architect's job is roughly 80% planning and 20% construction oversight, and the physical building goes up faster and more accurately because of it. If you start pouring concrete before the blueprints are done, you end up tearing out walls to make room for pipes that were never accounted for.
Proposal writing works the same way. The 40% planning phase is where you read and annotate the solicitation, build your compliance matrix, develop your win themes, create your proposal outline, and assign every section to a writer with a word count and due date. When you actually sit down to write, there is no blank page; there's a blueprint. The writing goes fast because every decision has already been made.
Most contractors flip that ratio: 10% planning, 80% writing, 10% reviewing. That's why they submit non-compliant proposals: they're building without blueprints. They don't discover the page limit problem until page 22. They don't realize they missed an evaluation sub-element until after submission. The planning phase is where the win is built. Most people sprint through it to get to the part that feels like work.
Government evaluators are looking for two things: compliance (did you answer everything required, in the required format, within the required limits?) and merit (does your approach demonstrate you understand the problem and can solve it?). Concise, organized, criteria-mapped proposals score higher than long, padded ones that bury the key points. Evaluators are reading 10–30 proposals. Make their job easy.
A win theme is a specific, evidence-backed reason why your company should win this particular contract. Not a generic capability claim, but a discriminator tied to this agency's specific problem.
Discriminator: What makes you different from the competition. Not just what you do well, but what you do better or differently than the other bidders.
Benefit: Why that difference matters to THIS specific agency's mission, problem, or priority, not why it matters in general.
Proof: A specific, verifiable past performance data point or metric that makes the claim credible. Without proof, it's a claim. With proof, it's evidence.
"Our experienced team will deliver high-quality IT services on time and on budget."
Why it fails: every single competitor says this. "Experienced," "high-quality," "on time and on budget" are the default claims of every proposal ever submitted. There is no discriminator, no benefit to THIS agency, and no proof. An evaluator reads this and marks nothing.
"Our bilingual support team reduced ticket resolution times by 34% for HHSC's South Texas offices, where 67% of users are Spanish-speaking, the same demographic as this requirement."
Why it works: Discriminator (bilingual team). Benefit tied specifically to THIS agency's user demographics. Proof (34% reduction, specific office, specific data). An evaluator reads this and marks it.
Mine three sources: (1) the SOW: what does the agency say is most important? Underline every "critical," "essential," and "high-priority" in Section C. (2) Section M: what are the evaluation sub-elements? The agency told you exactly what they're scoring. Build a win theme around each weighted sub-element. (3) the incumbent contract: search USASpending for the current contract. What did the previous vendor do? Your win theme might be "we solve the problem the incumbent created."
The three-touch rule: Each win theme must appear in three places: the executive summary (where evaluators form their first impression), the technical approach (where the theme is demonstrated), and the past performance (where the theme is proved). An evaluator reading your proposal should encounter the same core message at least three times. Repetition isn't redundancy in proposals; it's reinforcement.
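The three-touch rule is mechanical enough to check with a short script. Below is a minimal sketch, assuming your volumes are available as plain text; the section names, sample text, and function are illustrative, not part of any real proposal tool:

```python
# Hypothetical three-touch check: flag any required section where a win-theme
# keyword never appears. Section names and sample text are illustrative.

REQUIRED_TOUCHES = ["executive_summary", "technical_approach", "past_performance"]

def three_touch_gaps(theme_keyword, sections):
    """Return the required sections where the theme keyword never appears."""
    return [
        name for name in REQUIRED_TOUCHES
        if theme_keyword.lower() not in sections.get(name, "").lower()
    ]

sections = {
    "executive_summary": "Our bilingual support team anchors our approach...",
    "technical_approach": "Staffing centers on the bilingual support desk...",
    "past_performance": "Resolution times fell 34% after deployment.",  # theme missing
}

print(three_touch_gaps("bilingual", sections))  # → ['past_performance']
```

Running a check like this per theme before the red team review catches the common failure mode: a theme stated once in the executive summary and never reinforced.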
Proposal writing is where Alex earns its keep. Use it to develop win themes, review drafts, and sharpen your technical approach before you submit.
Specific volumes required will be in Section L of the solicitation; follow that exactly. This is the standard structure for service contract RFPs.
Your plan for doing the work. Demonstrate understanding of the agency's specific challenges, explain your methodology with enough detail to be credible, and cover key personnel qualifications, your management plan, and risk mitigation. Section headers should mirror Section M's evaluation factor names exactly: if Section M says "Factor 1: Technical Understanding," your Volume I should have a section titled "Technical Understanding."
The most common mistake: describing company capabilities instead of a specific approach to this contract. Example: "We have 15 years of IT service experience and a certified team." That's a capability claim. An evaluator can't score that. Instead: "For this requirement, we will deploy a 3-person onsite team with dedicated Tier 2 and Tier 3 escalation paths, reducing average incident resolution time from 4 hours to under 90 minutes by Q2." That's an approach. It's specific, it addresses the agency's problem, and it's scoreable.
3–5 relevant projects, each documented with: client name, client POC name AND current phone number, contract number or identifier, dollar value, period of performance, specific deliverables, and a relevance statement explaining how this project is comparable to the current requirement. Agencies verify past performance references; they call the POC. Use a POC who will answer and who will speak well of the work.
New contractors: commercial experience, subcontractor experience, and key personnel's prior employer experience all count, but they must be formatted identically to government references. Don't write "we did similar work as a subcontractor." Write it exactly like a prime contract reference, with all the same fields completed. The evaluator sees a well-documented past performance example, not a footnote.
Formatted exactly as the solicitation requires: use the government's provided spreadsheet or price table if one is attached in Section J. Pricing is kept separate from the technical volume; evaluators often score them independently. Covered in depth in Module 12. The critical rule now: your price volume must be arithmetically correct before anything else. A pricing error that survives into submission can get you disqualified or trap you in a below-cost contract.
Required forms including SF-1449 (Solicitation/Contract/Order for Commercial Items, the cover page of most commercial item solicitations) and SF-33 (Solicitation, Offer, and Award, used for negotiated procurements). These are government-standard cover sheets where you certify your offer, acknowledge amendments, and confirm your representations. Agency-specific certifications, your subcontracting plan if required, and signed acknowledgment of all amendments go here. Missing any required form is an easy, preventable disqualifier; add each one as a row in your compliance matrix.
Many RFPs require a standalone executive summary; check Section L. If required, it's typically submitted as the first section of Volume I or as a separate volume. Even when not required, experienced proposal teams include one because evaluators often read it first, form an impression, and then look for evidence of that impression in the rest of the proposal.
The executive summary is not a table of contents. It's where you lead with your win themes (the three-sentence version of why you should win) and frame the evaluator's reading of everything that follows. Write it last. It should summarize what you proved, not introduce what you're about to prove.
Most proposals aren't lost on price. They're lost because the evaluator couldn't score them: missing response to a specific criterion, buried win theme, correct answer to the wrong question, or strong content that exceeded the page limit and was never read. Every mistake below has cost contractors real contracts.
The evaluator already knows what the SOW says; they wrote it. When you restate it, you spend your precious page limit telling the government what they already know and zero words proving you can do it. It's the written equivalent of answering "What are your qualifications?" with "The job requires someone with 5 years of experience." Technically accurate. Completely unscoreable.
"The contractor shall provide technical support for the agency's 1,400 employees, including Tier 1 helpdesk and Tier 2 escalation capability during business hours."
This is the SOW copied with minor rewording. The evaluator scores nothing here.
"We deploy a lead technician on-site for 48 hours before go-live to interview the 15 highest-volume ticket submitters and build a known-issues database. Day-one coverage: 2 Tier 1 agents 8am–5pm on-site; 1 Tier 2 remote 7am–7pm. Target: 80% of Tier 1 tickets resolved within 2 hours, tracked on a monthly COR dashboard."
Specific approach, credible methodology, scoreable metrics.
Section L typically contains language like: "Technical proposals exceeding 25 pages will not be evaluated beyond page 25." This is a contractual rule, not a reader preference. The evaluator is required to stop. Your strongest argument on page 27 does not exist in the evaluation record. Think of it like a federal court filing: a judge's brief limit is 25 pages. File 30, and the clerk stamps pages 26–30 "not considered." Nothing on those pages enters the record; it's not the judge being unfair, it's the rule you agreed to by filing.
The fix is mechanical: before you write a single word, open Section L and find the page limit. Put it at the top of your outline document. Write to that number from the first draft. Cutting a bloated 30-page proposal down to 25 pages in the last 48 hours before submission is one of the most agonizing experiences in proposal writing, and it is entirely preventable.
Wrong agency name and contract number are the obvious tells, but the deeper problem is worse. Boilerplate was written to answer a different set of requirements. When you paste your cybersecurity methodology paragraph from a 2023 network defense proposal into your 2025 cloud migration RFP, your security approach almost certainly doesn't address the cloud-specific sub-elements the current evaluator is scoring. You've answered the right general topic in completely the wrong specific context. It's like a student answering Question 3 on an exam when the teacher asked Question 4: the answer might be technically correct, but it earns no credit because it doesn't respond to what was actually asked.
The rule: every major section of your proposal must contain at least one reference that could only appear in this solicitation: something specific to this agency's scope, this RFP's stated priorities, or this contract's particular performance requirements. If a paragraph could have appeared unchanged in your last proposal, rewrite it.
Amendments don't just move due dates. They frequently change evaluation criteria weights, add or remove technical requirements, revise page limits, and update price templates. If Amendment 2 reduced the Technical factor weight from 40% to 30% and added a new Past Performance sub-element, and you didn't read it, you've spent weeks optimizing your proposal against the wrong scoring model. And if you didn't acknowledge the amendment in your submission, that's an automatic non-responsive determination before a single evaluator reads a word of your technical volume.
Every time a new amendment is issued: add a row to your compliance matrix, check the acknowledgment box in Volume IV, and re-read Section M to see if the scoring criteria changed. The 20 minutes you spend on this is the cheapest insurance in proposal development.
"Extensive experience," "proven track record," "robust capabilities," "deep domain expertise": these phrases are completely unscoreable. Evaluators are required to justify their scores with specific evidence from the proposal text. When your proposal says "extensive experience," there is nothing for them to cite in their scoring rationale. The phrase cannot support a score of Outstanding or Good. It is not neutral filler; it is wasted page space that signals to an experienced evaluator that you don't have real metrics to offer.
"Our team has extensive experience delivering IT services to federal agencies and a proven track record of on-time delivery."
"Our team has delivered 14 federal IT service contracts since 2018, averaging a 96% on-time delivery rate across 47,000 service tickets. Most recent CPARS rating: Exceptional."
You describe your methodology (detailed, accurate, technically sound) but you never explain what it means for the agency's specific program. "We will use Agile development methodology" is a description. It tells the evaluator how you work. It doesn't tell them what that choice means for their contract.
"We will use two-week Agile sprints, which means the agency can redirect development priorities every 14 days without a contract modification or a scope change request, reducing administrative overhead compared to traditional Waterfall methods" is the so-what. Every methodological choice should connect to an explicit agency benefit. Evaluators score merit relative to mission impact. They are not required to infer the benefit of your approach. Make the connection explicit, every time, in every section.
Use section headers in your proposal that mirror the evaluation factor names in Section M exactly. If Section M says "Factor 1: Technical Understanding of Requirements" and "Sub-factor 1.1: Staffing Approach," your proposal should have sections with those exact names. The evaluator opens Section M, sees "Technical Understanding of Requirements," opens your proposal, and finds a section with that heading immediately. No hunting. No inferring. Direct mapping.
Professional proposal managers call this ghosting the evaluator: you've essentially written the scoring roadmap inside the proposal itself. The evaluator's job is to mark where each factor is addressed. You've made that job trivial. Evaluators reward what makes their job easy.
Open Section M on one screen. Open your proposal on the other. For each evaluation sub-element listed in Section M, ask one question: "Where exactly do I address this? What page? What section header?" If you cannot point to it within 30 seconds, the evaluator cannot either. If they cannot find it, they cannot score it. If they cannot score it, it is worth zero, regardless of how good the content actually is.
This test takes 20 minutes. It finds more proposal problems than any other review technique because it forces you to look at your proposal the way an evaluator does: starting from the scoring rubric, not from page one. Run it every time, on every proposal, without exception.
Before submission, every proposal needs at least one review by someone who hasn't been writing it. The writer is too close to see what's missing, and too close to catch what's wrong. Three reviews serve three different purposes. Timing matters as much as who does them.
Check every row of your compliance matrix against the final proposal document. Every requirement addressed, every form included, every page limit respected, every amendment acknowledged. Do this with the actual solicitation open beside the proposal, not from memory. This is a mechanical check, not a quality check. The question is binary: is it there, or is it not?
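Because the question is binary, the check lends itself to a few lines of code. A minimal sketch, assuming you keep the compliance matrix as structured rows; the field names and sample requirements are illustrative, not from a real solicitation:

```python
# Hypothetical compliance-matrix rows: each requirement, where the final
# proposal addresses it, and whether a reviewer confirmed it is present.
matrix = [
    {"req": "Acknowledge Amendment 0002", "where": "Volume IV, p. 2",  "present": True},
    {"req": "Key personnel resumes",      "where": "Volume I, App. A", "present": True},
    {"req": "Signed SF-33",               "where": "",                 "present": False},
]

def open_items(rows):
    """Return requirements not yet confirmed present at a specific location."""
    return [r["req"] for r in rows if not (r["present"] and r["where"])]

print(open_items(matrix))  # → ['Signed SF-33']
```

The output is your punch list: submission is not ready until the function returns an empty list.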
Give the proposal to someone completely unfamiliar with the project: no briefing, no context. If they need to ask "what do you mean here?", that section needs revision before submission. Evaluators are reading 10–30 proposals. They will not ask for clarification. They will mark the section unclear and move on. The clarity reviewer's job is to find every sentence that requires prior knowledge of the project to understand.
The most valuable review in proposal development, and the most commonly skipped. The red team reviewer receives only two documents: Section M (the evaluation criteria) and your proposal draft. No briefing. No explanation of what you meant. They score the proposal against each Section M factor as if they were a government evaluator. Every gap they find is a gap the actual evaluator will find. Every weak argument they identify is a weak argument your score will reflect.
Doing a red team requires enough time to actually revise; that's why it happens 7–10 days out, not the night before. Proposals that go through a genuine red team consistently outperform those that don't.
For the compliance review: ask your APEX counselor, a trusted colleague, or a proposal-savvy peer. Give them your compliance matrix and the final proposal. Ask them to check every row. They do not need to read the whole proposal; just confirm each requirement is addressed where the matrix says it is.
For the red team: give your reviewer only Section M and the proposal draft. Explicitly tell them: "Do not ask me what I meant. Score each factor as a government evaluator would, and tell me where the score would be low and why." The debrief from a red team reviewer who was not briefed tells you exactly what the real evaluators will think, because they had the same information gap.
You submitted. You have no idea what's happening. Nobody tells you anything. Here's exactly what's going on, and how long it actually takes.
The contracting team confirms your submission is complete and compliant: all required forms present, page limits respected, amendments acknowledged. This is the compliance gate. Non-responsive proposals are eliminated here before a single evaluator reads your technical content. It doesn't matter how good your approach is if you missed a required form.
Independent technical evaluators score each compliant proposal against the Section M criteria, the exact criteria you mirrored in your proposal structure. Ratings assigned per factor: Outstanding, Good, Acceptable, Marginal, or Unacceptable. The cost/price analyst evaluates your price volume separately. Evaluators work independently until the panel debrief, so no single evaluator sees the full picture until scoring is complete.
The Source Selection Authority (SSA), often a senior agency official rather than the CO, reviews the panel's findings and makes the final trade-off decision. On best-value acquisitions, the highest technical score doesn't automatically win. If two proposals are close technically, the lower-priced one may take it. The award notice is posted publicly on SAM.gov. You may also receive a direct notification from the CO. If you don't win, you typically hear nothing at all unless you ask for a debrief.
Simplified acquisitions (under $250K): 2–4 weeks from submission to award. Lighter evaluation process, faster internal approvals.
Competitive acquisitions ($250K–$5M): 6–12 weeks typical. Multiple evaluators, more review layers.
Large/complex acquisitions ($5M+): 3–6 months or longer. Senior source selection authority involvement, possible discussion rounds (where the government asks offerors to clarify or improve their proposals), and potential Best and Final Offer (BAFO) rounds can each add weeks.
None of these are guaranteed. Budget holds, internal staffing changes, and bid protests can push any of them out further. Set a calendar reminder to check SAM.gov for the award notice at the midpoint of the expected window, then check weekly after that.
The most expensive mistake after submitting a proposal is stopping. Your pipeline doesn't wait for the government to make up its mind.
Keep searching SAM.gov daily; new solicitations post constantly in your NAICS codes.
Research the agency on USASpending.gov; look at who won similar contracts in the past 2–3 years. By the time you get to the debrief, you'll already know your competition.
Write down what you'd improve on the proposal you just submitted, while it's fresh. Every submission is a lesson. Capture it before the next one starts.
One bid is not a strategy. Twelve to fifteen proposals a year is a strategy. The contractors who win reliably aren't better writers; they're more consistent submitters.
| Term | Definition |
|---|---|
| Win Theme | A specific, evidence-backed reason why your company should win this particular contract (discriminator + benefit + proof), woven throughout the entire proposal |
| Technical Volume | The section explaining your approach, methodology, key personnel, and management plan, evaluated against Section M's technical factors |
| Past Performance Volume | 3–5 relevant prior projects with client contacts, contract values, and specific deliverables demonstrating capability to perform the current work |
| Best Value | Evaluation methodology where the government doesn't necessarily award to the lowest price; it selects the best combination of technical merit, past performance, and price |
| Ghosting / Section M Mirroring | Structuring proposal section headers to mirror the exact evaluation factor names in Section M, so evaluators can navigate directly from the scoring rubric to the corresponding proposal section without hunting |
| Red Team Review | A final proposal review, done 7–10 days before submission, where an independent reviewer scores the proposal against Section M criteria as if they were a government evaluator, identifying gaps, weak arguments, and compliance issues before the actual evaluation |
| Loaded Labor Rate | The fully burdened cost of putting one employee on contract: direct wages plus fringe benefits (health insurance, payroll taxes, retirement; typically 25–40% of direct wages) plus overhead, G&A, and fee/profit. What you actually bill the government per hour. Miscalculating this rate means your entire price volume is wrong. |
| Executive Summary | A 1–2 page standalone section that leads with your win themes and frames the evaluator's reading of the full proposal. Written last (after the rest is complete). Check Section L; it may be required. Even when optional, experienced proposal teams include one because evaluators often form their initial impression here before reading the technical volume. |
| Non-Responsive | A proposal automatically eliminated before quality evaluation because it failed to meet a mandatory solicitation requirement, such as missing a required form, exceeding a page limit, or failing to acknowledge an amendment |
A real scenario from the field. No answer permanently locks you out, but the consequences below are real. Choose one, then see what unfolds.
A loaded labor rate is the fully burdened cost of putting one employee to work on a government contract. It's not what the employee earns; it's what it actually costs you to have them working, hour by hour, billed to the government.
Think of it like a restaurant calculating the price of a dish. The ingredient cost (what the employee takes home as direct wages, say $45/hour) is only the starting point. The restaurant also has to cover the cook's fringe benefits, the kitchen rent, the manager's salary, the utility bill, and then add a profit margin. Only then do you get the price on the menu. Same structure for government contracts.
The layers of a loaded rate:
- Direct wages: what the employee earns per hour ($45)
- Fringe benefits: health insurance, payroll taxes (FICA, FUTA), 401(k) match, paid leave; typically 25–40% of direct wages (adds ~$14–18/hour)
- Overhead: facilities, equipment, managers allocated to this contract
- G&A (General & Administrative): corporate costs spread across all contracts
- Fee/profit: what the company actually earns
When all layers are stacked correctly, $45/hour in direct wages might become $85–$95/hour billed to the government. If you forget to include FICA contributions or miscalculate the fringe benefit rate, every labor line in your price table is understated, and the total contract price you bid is wrong.
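The stacking arithmetic can be sketched in a few lines. The buildup order below (fringe on direct wages, overhead on direct plus fringe, G&A on all costs so far, fee on total cost) is one common structure, and the indirect percentages are illustrative assumptions; your real rates come from your own cost pools:

```python
# Illustrative loaded-rate buildup using the chapter's $45/hour example.
# All percentages are assumptions for the sketch, not recommended rates.

def loaded_rate(direct, fringe_pct, overhead_pct, ga_pct, fee_pct):
    """Stack each cost layer on the running base, then add fee on total cost."""
    fringe = direct * fringe_pct                  # applied to direct wages
    overhead = (direct + fringe) * overhead_pct   # applied to direct + fringe
    subtotal = direct + fringe + overhead
    ga = subtotal * ga_pct                        # applied to all costs so far
    cost = subtotal + ga
    fee = cost * fee_pct                          # profit on total cost
    return round(cost + fee, 2)

rate = loaded_rate(45.00, fringe_pct=0.32, overhead_pct=0.25, ga_pct=0.10, fee_pct=0.08)
print(rate)  # → 88.21, inside the $85–$95/hour range the text describes
```

Notice how sensitive the result is: dropping the fringe layer entirely understates every hour billed, which is exactly the error described in the scenario above.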
You're 4 days from proposal submission. Your technical volume is complete and solid. Your price volume has a calculation error in the labor rate table โ loaded rates don't correctly include all fringe benefits. Fixing it will take 8 hours you planned to spend polishing the technical volume.
Make a choice above, then continue to the knowledge check.
Three quick questions to lock in what you just learned. Click any answer; right or wrong, you'll see the full explanation. The goal is retrieval, not a grade.
Module 12 covers how to price to win: understanding cost-plus vs. fixed price, building a realistic cost model, and pricing your proposal to beat the competition without losing money on the contract.
AI-generated proposal sections tailored to your opportunity: Executive Summary, Technical Approach, Management Plan, Staffing, Past Performance, and Pricing. Export to DOCX in one click.
Module 12: Pricing to Win →