
    GRUMMAN DATA SYSTEMS CORPORATION, Appellant, v. John H. DALTON, Secretary of the Navy, Appellee, and Intergraph Corporation, Intervenor.
    No. 95-1214.
    United States Court of Appeals, Federal Circuit.
    June 28, 1996.
    
      William W. Thompson, Jr., Thompson & Waldron, Alexandria, Virginia, argued for appellant. With him on the brief were Michael A. Branca, and John H. Tracy.
    Arnold M. Auerhan, Attorney, Commercial Litigation Branch, Civil Division, Department of Justice, Washington, D.C., argued for appellee. With him on the brief were Frank W. Hunger, Assistant Attorney General, David M. Cohen, Director, and Kirk T. Manhardt, Assistant Director. Of counsel were Thomas L. Frankfurt and Lis B. Young, Office of Counsel, Naval Information Systems Management Center, Department of the Navy, Washington, D.C.
    Rand L. Allen, Wiley, Rein, & Fielding, Washington, D.C., for intervenor. With him on the brief was Christine E. Connelly.
    Before RICH, RADER and SCHALL, Circuit Judges.
   SCHALL, Circuit Judge.

Grumman Data Systems Corporation (“Grumman”) appeals from the October 27, 1994 decision of the General Services Administration Board of Contract Appeals (“Board”) in Grumman Data Systems Corp. v. Department of the Navy. In that decision, the Board denied Grumman’s protest of the award of an automatic data processing equipment supply contract to Intergraph Corporation (“Intergraph”) by the Department of the Navy (the “agency”). We affirm.

BACKGROUND

At issue in this case is a best value procurement authorized by 48 C.F.R. § 15.605(c) (1994) (“[I]n certain acquisitions the Government may select the source whose proposal offers the greatest value to the Government in terms of performance and other factors”). On July 15, 1991, the agency issued Request for Proposals No. N66032-91-R-0006 (the “Solicitation”). The Solicitation sought proposals for supplying computer-aided design, manufacturing, and engineering technology to the agency. The Solicitation provided that a supply contract would be awarded to the “Offeror [that] provides the greatest value, price and other factors considered.” The Solicitation further provided that the agency would “balance the technical differences with the proposed overall cost to determine the best value to the [agency].” The Solicitation set forth a variety of requirements, including four-thousand (4000) “minimum mandatory requirements” (“MMRs”), and provided that an offeror’s failure to meet the MMRs would eliminate the offeror from further consideration.

The Solicitation called for a Source Selection Evaluation Board (the “SSEB”), a Source Selection Advisory Council (the “SSAC”), and a Source Selection Authority (the “SSA”). The SSEB and SSAC were to consider the proposals submitted in response to the Solicitation and were to make recommendations to the SSA. The SSA was to serve as the ultimate decision-maker, deciding which offeror would receive the contract.

Grumman, Intergraph, and two other companies submitted proposals in response to the Solicitation. The SSEB evaluated each of the proposals for technical merit and price. In evaluating the technical merit of the proposals, the SSEB conducted a five-day test of each offeror’s equipment. The test, known as an Operational Capability Benchmark Demonstration (“OCBD”), examined approximately one-fourth of the 4000 MMRs for each offeror’s equipment. In addition to conducting the OCBD, the SSEB analyzed the offerors’ written technical proposals by comparing the proposals with the requirements of the Specification; it also assessed the offerors’ management proposals. Through its evaluation of the technical merit of the proposals, the SSEB concluded that all four offerors met the MMRs. Based upon its evaluation, the SSEB assigned an overall technical merit score to each offeror. In the SSEB’s report to the SSAC, Intergraph received the highest overall technical merit score, while Grumman received the third-highest overall technical merit score. The SSEB’s report also provided the results of its price evaluation. The SSEB found that Intergraph offered the lowest price, and that Grumman offered the second-lowest price. In sum, the SSEB concluded that, of the four proposals, Intergraph’s proposal offered the highest technical merit rating and the lowest price.

After reviewing the SSEB’s report and receiving a briefing from the SSEB, the SSAC — although it was not required by the Solicitation to do so — commissioned an Impact Analysis Working Group (“IAWG”) to compare technical differences among the proposals. The IAWG identified sixteen areas of technical differences (called “discriminators”), and divided the sixteen areas into two categories: quantifiable and non-quantifiable. The IAWG identified and analyzed four quantifiable discriminators. The IAWG based its analysis on tasks that were likely to be commonly undertaken by the equipment’s users, and on how much time generally was spent on those tasks. Using this information, and federal government pay-rates, the IAWG estimated the cost to the agency to use each offeror’s proposal, and then assigned a value to the government for each proposal. The IAWG reported that, over the period of the contract, the agency would save $97 million if it selected Grumman’s proposal instead of Intergraph’s proposal.

After reviewing the IAWG’s report, the SSAC stated that it was “not convinced that there was a reasoned argument for the IAWG quantification.” The SSAC explained that “the area of non-quantified discriminators needs to be also considered.” The SSAC directed the IAWG to provide a supplemental analysis.

After conducting further studies, the IAWG reported to the SSAC that Grumman was the “overall best solution in the mechanical area ... [,] followed very closely by [In-tergraph].” The IAWG also reported that the agency would save between $98 million and $242 million over the period of the contract if it selected Grumman’s proposal instead of Intergraph’s proposal. The IAWG explained that the great range of its savings estimate was due to variations given to “certain underlying assumptions.”

The SSAC examined the SSEB report and the IAWG report, and rejected the IAWG’s quantified value determinations. The SSAC stated that it “did not feel that the quantified value determinations of the IAWG were sufficiently compelling that value would materialize.” The SSAC gave four reasons for that conclusion. The first reason that the SSAC gave was that, in the IAWG exercise, only one operator completed the model exercise for each offeror’s equipment. The SSAC explained that it would “have more confidence in the results if several operators had completed the model exercise, or if it were feasible to obtain [o]fferor input regarding the model exercise.” The SSAC’s second reason was that the software version used for part of the IAWG’s analysis was different from that used in the OCBD. The third reason given by the SSAC was that the task performed for the IAWG “process study” was “relatively simple, and a more comprehensive exercise would more fairly differentiate the offerors.” Fourth, the SSAC explained, the IAWG’s quantification process study for two of the discriminators “exercised” an application that represented just one-third of the total workload identified in the Solicitation.

Finally, in conducting a price/technical trade-off analysis, the SSAC reviewed both Grumman’s and Intergraph’s proposals. After doing so, the SSAC concluded that “both Intergraph and Grumman’s technical proposals offered potential benefits and values.” However, “lacking a high confidence of realizing such value,” the SSAC determined that “the Intergraph proposal, with its [much] lower price, offered the overall best value, cost and other factors considered, to the government.” Consequently, the SSAC recommended that the SSA award Intergraph the contract.

The SSA studied the recommendation of the SSAC, as well as the SSEB’s and the IAWG’s analyses. Having done so, he explained that he had “considered the quantification process study conducted by the IAWG and the resultant quantified value determinations.” The SSA stated: “[M]y confidence level in the reliability of the IAWG’s process study is such ... that I am not reasonably convinced that the projected dollar savings associated with the quantified value determinations would materialize or that the quantified value determinations are indicative of the differences in technical value of the proposals.” The SSA gave two reasons for this conclusion: (1) the IAWG’s quantification process study was “not sufficiently comprehensive,” in that only one software application was analyzed and only one simple task was performed for each offeror’s equipment; and (2) because the number of keystrokes was the basis for the entire quantification process study, the study was suspect because several variable factors could affect the keystrokes assessed for each offeror.

The SSA stated that all four offerors

submitted technically acceptable proposals. Both Intergraph ... and Grumman ..., based upon their overall Technical Merit evaluation and scores, and the impact analysis associated with the non-quantified discriminators, offered quality solutions using different technologies. I have looked at the technical merits of both Intergraph’s and Grumman’s proposals and have concluded both offer benefits. In weighing the benefits in each proposal, I am unable to conclude that the benefits from Grumman’s proposal are offset by the price premium.

Based upon this reasoning, the SSA authorized award of the contract to Intergraph.

In its protest before the Board, in which Intergraph intervened, Grumman alleged, inter alia, (1) that the SSA’s selection of Intergraph’s proposal as the “best value” was unreasonable and irrational in that it discounted the recommendations of the IAWG; and (2) that Intergraph failed to meet some of the MMRs of the Solicitation.

In denying Grumman’s protest, the Board reasoned that

[t]he SSA, although mindful of [the] IAWG’s recommendation, has selected Intergraph as the high technical/low cost offeror. This was a decision well within his authority to make, and it is consonant with the terms of the solicitation. Protestor challenges that selection but has the burden of proving it wrong. The most generous view of its effort to do so is that it has achieved a tie. But the Navy wins all ties, because a tie is less than a preponderance of the evidence, which is what the protestor needs to succeed.

The Board also rejected Grumman’s arguments regarding the MMRs, and denied the protest.

DISCUSSION

We must uphold the Board’s decision “on any question of fact ... unless the decision is fraudulent, or arbitrary, or capricious, or so grossly erroneous as to necessarily imply bad faith, or if such decision is not supported by substantial evidence.” 41 U.S.C. § 609(b) (1994); see Grumman Data Sys. Corp. v. Widnall, 15 F.3d 1044, 1046 (Fed.Cir.1994). Substantial evidence is “such relevant evidence as a reasonable mind might accept as adequate to support a conclusion.” Frank v. Department of Transportation, 35 F.3d 1554, 1556 (Fed.Cir.1994). We review the Board’s decisions on questions of law de novo. Caldwell & Santmyer, Inc. v. Glickman, 55 F.3d 1578, 1581 (Fed.Cir.1995). Because the contract at issue involves the procurement of automatic data processing equipment, the Board’s review of the agency’s decision to award the contract to Intergraph was governed by the Brooks Act, as amended, 40 U.S.C. § 759(f)(5)(B) (1994):

If the board determines that a challenged agency action violates a statute or regulation or the conditions of any delegation of procurement authority issued pursuant to this section, the board may suspend, revoke, or revise the procurement authority of the Administrator or the Administrator’s delegation of procurement authority applicable to the challenged procurement.

I.

Grumman’s first argument on appeal is that the Board erred in upholding the SSA’s finding that Intergraph’s proposal provided the “greatest value to the Government, price and other factors considered.” In making this argument, Grumman contends that the SSA’s rejection of the IAWG’s conclusion that Grumman’s proposal would cost the agency less than Intergraph’s proposal was unlawful and caused the SSA’s decision to deviate from the Solicitation’s “best value” requirement.

This court, as well as the Board, must afford great deference to agencies’ decisions in relation to procurement. See Lockheed Missiles & Space Co. v. Bentsen, 4 F.3d 955, 958-59 (Fed.Cir.1993) (“Effective contracting demands broad discretion.”); Tidewater Management Servs., Inc. v. United States, 216 Ct.Cl. 69, 573 F.2d 65, 73 (1978) (agencies “are entrusted with a good deal of discretion in determining which bid is the most advantageous to the Government”). More particularly, what we stated recently in Widnall v. BSH Corp., 75 F.3d 1577 (Fed.Cir.1996), applies to this case:

This case must be viewed in light of the many previous cases under [the Competition in Contracting Act of 1984 (“CICA”) ] in which the Board has reviewed an agency’s best value choice. Precedent dictates that the Board’s task on review is to determine if an agency’s procurement decision is grounded in reason. Once the Board determines that the agency’s selection is so grounded, it then defers to the agency’s decision even if the Board itself might have chosen a different proposal.

75 F.3d at 1580 (citing Oakcreek Funding Corp., GSBCA No. 11244-P, 91-3 B.C.A. ¶ 24,200, at 121,041 [1991 WL 133389]). Grumman’s attack on the SSA’s best value determination cannot succeed under the grounded-in-reason standard of review.

As mentioned above, the SSA’s first reason for rejecting the IAWG’s conclusions was that the IAWG’s quantification analysis was “not sufficiently comprehensive.” Grumman asserts that the SSA erred because he rejected the best quantifiable information available to him. According to Grumman, all four of the sixteen discriminators that could be quantified were quantified by the IAWG. Grumman’s argument is flawed. The SSA did not reject the IAWG study because he did not have the “best quantifiable information.” Rather, he rejected it because the information he had was “not sufficiently comprehensive.” The record supports the SSA’s conclusion in this regard. Grumman does not dispute the SSA’s finding that the IAWG analyzed only one software application for each offeror and that only one simple task was performed for each application. The SSA’s decision to reject the IAWG’s conclusions because of the highly-limited scope of the IAWG’s test was grounded in reason.

II.

Grumman’s second argument on appeal involves one of the MMRs. Section C11.3.1.1 of the Solicitation stated that, in the system to be provided, the Hardware/Software Simulation Accelerator is to “act as an element of the entire system.” The section further stated that the Accelerator is to “support simultaneous multi-level modeling (switch, gate, behavioral, VHDL, Hardware)” and that this “tool set shall simulate at least one (1) million logic primitives, and perform at least (1) million evaluations per second.” Grumman argues that section C11.3.1.1 requires that the Accelerator be able to carry out a complete simulation algorithm at least one million times per second (in other words, perform tasks A, B, C, and D). Grumman further argues that Intergraph’s proposal does not meet this MMR, and that its failure to meet the requirement rendered Intergraph ineligible for the award of the contract. The agency and Intergraph, on the other hand, interpret section C11.3.1.1 to mean that the Accelerator simply has to be able to record at least one million changes per second to the device input (in other words, perform just task A). Neither the agency nor Intergraph, however, challenges Grumman’s statement that Intergraph’s proposal does not meet the requirements of the section as interpreted by Grumman.

The Board concluded that both interpretations were “plausible” and held that it did “not have to resolve the conflict.” The Board explained that “[i]n this circumstance protestor has not shown by a preponderance of the evidence that Intergraph and the Navy are wrong in their interpretation, and thus protestor cannot prevail.”

Grumman makes three arguments on this point: (1) that section C11.3.1.1 is unambiguous and must be interpreted as Grumman asserted before the Board; (2) that if the section is ambiguous, the ambiguity is latent and therefore the doctrine of contra proferentem requires that Grumman’s interpretation control; and (3) that even if the agency’s and Intergraph’s interpretation is followed, Intergraph’s proposal does not meet the requirements of the section.

A.

The interpretation of a contract provision is a question of law. Fortec Constructors v. United States, 760 F.2d 1288, 1291 (Fed.Cir.1985). Whether a contract provision is ambiguous also is a question of law. Community Heating & Plumbing Co. v. Kelso, 987 F.2d 1575, 1579 (Fed.Cir.1993). A contract term is unambiguous when there is only one reasonable interpretation. C. Sanchez & Son, Inc. v. United States, 6 F.3d 1539, 1544 (Fed.Cir.1993). If more than one meaning is reasonably consistent with the contract language, then the contract term is ambiguous. Id. As just noted, Grumman argues that the “one million evaluations per second” requirement in section C11.3.1.1 of the Solicitation is clear and unambiguous and must be construed to have the meaning it urges. The agency and Intergraph take the position that the provision is ambiguous.

In support of its argument that section C11.3.1.1 is unambiguous, Grumman points to a statement in an Intergraph user’s guide that “an evaluation is a calculation of a signal’s future state based on the status of the driving device’s inputs.” The agency, on the other hand, cites certain testimony of the leader of the technical team that evaluated the offerors’ simulator solutions. That person stated that he had not heard of the term “evaluations per second” as a simulator performance measure prior to this procurement. The agency also points to testimony of Intergraph’s expert witness that the term “evaluations” is rarely used because “it is very difficult without a lot of further discussion to get a handle on exactly what it is that you are trying to measure or trying to talk about.” That expert went on to say that one in the simulation business could not “get an immediate impression without some further discussion as to what would be meant by that term.” Moreover, as noted above, the Board found both interpretations “plausible.” While we review the Board’s conclusion as to a provision’s ambiguity de novo, we afford that conclusion “careful consideration and great respect.” Community Heating & Plumbing Co., 987 F.2d at 1579.

We agree with the Board’s conclusion that the contract term calling for simulators that could “simulate at least (1) million evaluations per second” has more than one reasonable interpretation. Thus, we find the term ambiguous as a matter of law.

B.

Once we conclude that the contract language about which the protestor complains is ambiguous, “we must then determine whether that ambiguity was patent so as to impose a duty to seek clarification, or only latent.” Interwest Constr. v. Brown, 29 F.3d 611, 614 (Fed.Cir.1994). Whether an ambiguity is patent or latent is a question of law. Id. This determination is made on a case-by-case basis. Interstate Gen. Gov’t Contractors, Inc. v. Stone, 980 F.2d 1433, 1435 (Fed.Cir.1992). Precedent, however, provides us with some guidance in making this determination. One of this court’s predecessors stated that a patent ambiguity is one that is “obvious, gross, [or] glaring.” H & M Moving, Inc. v. United States, 204 Ct.Cl. 696, 499 F.2d 660, 671 (1974). We have explained that “a patent ambiguity does not exist where the ambiguity is ‘neither glaring nor substantial nor patently obvious.’” Community Heating & Plumbing Co., 987 F.2d at 1579 (quoting Mountain Home Contractors v. United States, 425 F.2d 1260, 1264, 192 Ct.Cl. 16 (1970)).

Intergraph argues that the ambiguity is patent, and that Grumman’s protest regarding this issue should be rejected because of Grumman’s failure to seek clarification during the procurement process. Conversely, Grumman asserts that the ambiguity is latent.

As discussed above, one expert testified that the use of the term “evaluations per second” in relation to simulator performance is unheard of, while another stated that the term is rarely used because it is very vague and requires discussion before someone in the simulator field can understand what is meant by it. Moreover, at least one of the four offerors specifically inquired about the meaning of the provision. Grumman makes a bare statement that the term, if ambiguous, is a latent, and not a patent, ambiguity. Grumman, however, does not offer arguments, point to evidence in the record, or elaborate in any way on its position in this regard. We conclude that the term “evaluations per second” is patently ambiguous.

If a solicitation contains contract language that is patently ambiguous, a protestor cannot argue, before the Board or before this court, that its interpretation is proper unless the protestor sought clarification of the language from the agency before the end of the procurement process. See Grumman Data Systems, 15 F.3d at 1047; Lockheed Missiles & Space Co., 4 F.3d at 958; Community Heating & Plumbing Co., 987 F.2d at 1579; see also 48 C.F.R. § 6101.5(b)(3)(i) (1994) (“A ground of protest based upon alleged improprieties in any type of solicitation which are apparent before bid opening or the closing time for receipt of initial proposals shall be filed before bid opening or the closing time for receipt of initial proposals.”). There is no dispute that Grumman did not seek clarification of the provision before the end of the procurement process.

As mentioned above, during the procurement process at least one offeror submitted an inquiry regarding the meaning of “one million evaluations per second.” The inquiry asked:

How does the government define “evaluations per second”. We assume that when any device input changes, an evaluation is recorded. For example, a 2-input gate with both inputs connected to the same net to form an inve[r]tor would record two evaluations for each transition of the net driving the nand inputs. Is the above assumption correct?

The agency responded with the following:

The Offeror’s assumption is correct. An evaluation occurs when the simulator examines an element in the design and determines the value of its outputs based on the values of its inputs.

Intergraph asserts that the agency’s answer to the inquiry was itself patently ambiguous, and that Grumman’s failure to seek clarification of the answer’s meaning also prevents Grumman from asserting that its interpretation of the provision is proper. Grumman does not reply to this assertion.

The agency’s answer is susceptible to more than one meaning on its face. It can be read to indicate that an evaluation occurs when any device input changes. It can also be read to indicate that an evaluation occurs when the device outputs change. Indeed, Grumman’s expert witness on the meaning of the term “evaluations per second” indicated that the reply was a “yes-and-no” reply that is “subject to interpretation by he who reads it.” The Board explained that “the answer to the question as a whole supports the views of both sides.” As explained above, we afford that conclusion “careful consideration and great respect.” Community Heating & Plumbing Co., 987 F.2d at 1579. We find the agency’s affirmation of the inquirer’s assumption in the first sentence of the reply glaringly inconsistent with the agency’s explanation of when an evaluation occurs in the second sentence of the reply.

Thus, we hold that both section C11.3.1.1 and the agency’s answer to another offeror’s inquiry about that provision’s meaning are patently ambiguous. Grumman did not seek clarification of the patently ambiguous provision or the patently ambiguous answer. This failure prohibits Grumman from now arguing that its interpretation of the provision is the correct interpretation.

C.

The Solicitation provided that the “[f]ailure to meet [an MMR] shall eliminate an Offeror from further consideration.” Following the interpretation of section C11.3.1.1 urged by the agency, the Board found that Intergraph’s proposal met the provision’s requirements. Grumman asserts, however, that, even if the agency’s and Intergraph’s interpretation of section C11.3.1.1 is used, Intergraph’s proposal does not meet the requirements of the section.

A protestor seeking to overturn a finding of fact by the Board, including the Board’s determination that a proposal meets a particular Solicitation provision, bears the burden of demonstrating that finding is “fraudulent, or arbitrary, or capricious, or so grossly erroneous as to necessarily imply bad faith, or ... not supported by substantial evidence.” 41 U.S.C. § 609(b) (1994); CACI Field Servs., Inc. v. United States, 854 F.2d 464, 466 (Fed.Cir.1988). Grumman states that “[a]t the hearing, Grumman conclusively demonstrated Intergraph’s inability to perform even one million ‘input evaluations’ per second.” In making this argument, Grumman refers to Intergraph’s White Paper (a written submission describing the operation of Intergraph’s proposal), but does not point to its presence in the record. Nor does Grumman point to any evidence presented at the hearing, or to any other evidence in the record, to meet its burden in this regard. The agency and Intergraph, on the other hand, cite to evidence in the record that supports the Board’s conclusion that Intergraph met the “one million evaluations per second” provision. We therefore uphold the Board’s finding that Intergraph’s proposal met the “one million evaluations per second” requirement as supported by substantial evidence.

III.

The Solicitation required that proposals use the Standard Query Language (“SQL”) database language. It further provided that all SQL implementations “shall have been tested ... by the National Computer Systems Laboratory (NCSL),” and that this test “shall be used to confirm that the implementation meets the FIPS [ (Federal Information Processing Standards) ] requirements ... specified in this document.” The Solicitation explained that

[t]o be considered responsive, the Offeror shall:
(1)(a) Certify in the offer that implementations of FIPS offered in response to this document have been previously tested or validated and included on the current list of validated products maintained by the NCSL....
Proof of testing shall be provided in the form of a NCSL registered validation summary report ... from the NCSL. Proof of validation shall be in the form of a NCSL Certificate of Validation.
(b) If the Offeror is unable to comply with paragraph 1(a) above, then as a minimum, the offeror shall have been scheduled for NCSL testing, such testing to be completed by contract award....
(2) Agree to correct all deviations from the applicable FIPS reflected in the validation summary report not previously covered by a waiver. All deviations must be corrected within 6 months from the date of contract award.

It is undisputed that Intergraph did not complete SQL testing before it was awarded the contract. Grumman asserts that the SQL testing requirement is a MMR, and that Intergraph’s failure to meet the requirement mandates a declaration that Intergraph is ineligible for the award as a matter of law. Intergraph responds that the SQL testing requirement allows an offeror to fail the test as long as defects are corrected within six months after award; accordingly, Intergraph argues, the requirement is not a MMR. The agency maintains that the requirement mandates only that the testing be scheduled at the time of bidding, which Intergraph had done.

The SQL testing requirement states that an offeror, to be considered responsive to the requirement, “shall” do two things: (1) either (a) certify that the offeror’s product has been previously tested or validated and is on the NCSL’s current list of validated products or (b) schedule NCSL testing and complete this testing “by contract award”; and (2) state in the offer that the offeror agrees to correct all non-waived deviations from the applicable FIPS requirements that are shown in the validation report. Intergraph does not attempt to argue that it met part (1)(a) of the SQL testing requirement. Part (1)(b) plainly provides that, in the event that part (1)(a) is not met, testing must be completed “by contract award.” Intergraph is correct that the SQL testing requirement allows the successful offeror six months to correct certain defects, but this does not relieve the offeror from the requirement that testing must be completed “by contract award,” and does not deprive the requirement of MMR status. Thus, we find that Intergraph, in failing to complete SQL testing before it was awarded the contract, failed to meet a MMR of the Solicitation.

Intergraph and the agency argue in the alternative that even if the SQL testing requirement is a MMR, the award should be affirmed. Intergraph points out that it is undisputed that it passed the SQL testing requirement within 36 days of receiving the award. Intergraph and the agency claim that Intergraph’s failure to complete testing prior to award was a de minimis error in light of Intergraph’s successful completion of testing very soon after award and in light of the minimal nature of the deviation from the Solicitation’s requirements. Intergraph and the agency further assert that the policies and goals of economic and efficient procurement urge against setting aside a contract award for a minor matter.

A protestor bears the burden of proving error in the procurement process sufficient to justify relief. CACI Field Servs., 854 F.2d at 466. Not every error compels the rejection of an award. See SMS Data Prods. Group, Inc. v. United States, 900 F.2d 1553, 1557 (Fed.Cir.1990); Excavation Constr., Inc. v. United States, 204 Ct.Cl. 299, 494 F.2d 1289, 1293 (1974); 40 U.S.C. § 759(f)(5)(B) (1994). The Board must consider the significance of errors in the procurement process when deciding whether the overturning of an award is appropriate. Data General Corp. v. Johnson, 78 F.3d 1556, 1562 (Fed.Cir.1996); Andersen Consulting v. United States, 959 F.2d 929, 932-33, 935 (Fed.Cir.1992). We have held that de minimis errors do not require the overturning of an award. Andersen Consulting, 959 F.2d at 932-33, 935. “De minimis errors are those that are so insignificant when considered against the solicitation as a whole that they can safely be ignored and the main purposes of the contemplated contract will not be affected if they are.” Id. at 935.

The Board found Intergraph’s failure to undergo SQL testing before contract award to be “a minor matter, because by the terms of the [Solicitation] a failure on this point would not prevent contract award, a failed offeror would have six months to correct the deficiency, and even a failure to make corrections would not result in a contract breach.” We agree with the Board’s assessment on this point. We note also that the SQL testing requirement was one of 4000 MMRs and was the only one that Intergraph did not meet. We therefore hold that the Board did not err in concluding that Intergraph’s failure to meet the SQL testing requirement was a de minimis error insufficient to justify the overturning of the award.

CONCLUSION

For the foregoing reasons, the decision of the Board denying Grumman’s protest of the Navy’s award of its automatic data processing equipment supply contract to Intergraph is affirmed.

COSTS

Each party shall bear its own costs.

AFFIRMED. 
      
. GSBCA No. 12912-P-R, 95-1 B.C.A. ¶ 27,314, 1994 WL 645820 (Oct. 27, 1994).
     
      
      . The National Defense Authorization Act for Fiscal Year 1996 repealed 40 U.S.C. § 759, effective 180 days after February 10, 1996, thereby eliminating the Board's bid protest jurisdiction for automatic data processing equipment contracts. Pub.L. No. 104-106, §§ 5101, 5701, 110 Stat. 186, 680, 702 (1996).
     
      
. In Lockheed Missiles & Space Co., the court stated: "[A] proposal which is one point better than another but costs millions of dollars more may be selected if the agency can demonstrate within a reasonable certainty that the added value of the proposals is worth the higher price." 4 F.3d at 960. Grumman seizes upon this language — from a case with facts different from those here — to argue that the SSA's rejection of the IAWG’s findings placed the burden on the agency to prove with a "reasonable certainty” that its selection of Intergraph’s proposal was justified, in order to meet the grounded-in-reason test. The contention is without merit. In BSH, the court rejected an identical argument: "Neither the language nor the facts of the case show anything other than that Lockheed ... is consistent with the principle that the Board’s task upon review of a best value agency procurement is limited to independently determining if the agency's decision is grounded in reason. If this court wishes to alter such a longstanding principle, it will do so explicitly with supporting rationale.” 75 F.3d at 1584.
     
      
      . Because we conclude that the SSA could reject the IAWG's conclusions based on this reason alone, we do not address the second reason provided by the SSA (relating to keystrokes).
     
      
      . The agency's arguments imply that the agency's position is that the ambiguity is latent, but the agency takes no explicit position on this question.
     
      
. For example, one expert, Mr. Abdulrazzaq, testified that Intergraph complied with the requirements of C11.3.1.1.
     