This investigation is based on analysis of public procurement award notices published through Bangladesh’s electronic Government Procurement (e-GP) system. The unit of analysis is an awarded contract record as it appears in the dataset: who awarded it, who received it, when it was awarded, and the awarded value captured in the record.
Before analysis, supplier names were standardised to reduce duplicate spellings that can fragment a single firm into multiple entries. This standardisation removed common prefixes (for example, M/S or Messrs), trimmed inconsistent punctuation and spacing, and unified common legal forms (for example, Limited vs Ltd; Private vs Pvt). These steps aim to make firm-level counts more reliable, but they cannot fully resolve deeper identity issues such as ownership changes, subsidiaries, or firms with genuinely similar names.
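As a rough illustration of these steps, the sketch below shows one way such a cleaning rule could look in Python with pandas; the prefix list, the legal-form substitutions, and the supplier_name column are illustrative assumptions rather than the exact pipeline used.

```python
import re

def canonical_supplier(name: str) -> str:
    """Return a canonical key for a supplier name (illustrative rules only)."""
    if not isinstance(name, str):
        return ""
    key = name.upper().strip()
    # Drop common trade prefixes such as "M/S" or "Messrs"
    key = re.sub(r"^(M/S\.?|M\.S\.?|MESSRS\.?)\s+", "", key)
    # Unify common legal-form variants (Ltd -> Limited, Pvt -> Private)
    key = re.sub(r"\bLTD\b\.?", "LIMITED", key)
    key = re.sub(r"\bPVT\b\.?", "PRIVATE", key)
    # Strip stray punctuation and collapse repeated whitespace
    key = re.sub(r"[.,;:'\"()]+", " ", key)
    return re.sub(r"\s+", " ", key).strip()

# Example with a hypothetical firm name: both spellings collapse to one key
# canonical_supplier("M/S Rahman Traders Pvt. Ltd")    -> "RAHMAN TRADERS PRIVATE LIMITED"
# canonical_supplier("Rahman Traders Private Limited") -> "RAHMAN TRADERS PRIVATE LIMITED"
```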
Joint ventures and consortium-style awards were excluded from the calculations shown here. In the award-to field, multi-party arrangements often appear as “JV”, “Joint Venture”, or as paired names separated by markers such as “/”, “&”, or “and”, or as hyphenated partner listings. Rather than attempting to split these awards across partners – an approach that can introduce new errors – this story removes them so repeat-winner and concentration patterns reflect awards to single suppliers only.
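One way to implement that screen is a simple pattern match on the award-to string. The markers below follow the ones listed above, but the exact rule set, and the supplier_name column, are assumptions for illustration.

```python
import re

# Markers that suggest a joint venture or consortium in the award-to field.
# The " and " separator can over-match ordinary firm names, and hyphenated
# partner listings are hard to tell apart from hyphenated names, so flagged
# records still warrant manual review.
JV_PATTERN = re.compile(
    r"\bJV\b|\bJOINT\s+VENTURE\b|\bCONSORTIUM\b|\s/\s|\s&\s|\sAND\s",
    flags=re.IGNORECASE,
)

def looks_like_joint_venture(awardee: str) -> bool:
    """True if the award-to string matches any multi-party marker."""
    return bool(isinstance(awardee, str) and JV_PATTERN.search(awardee))

# single_firm_awards = awards[~awards["supplier_name"].map(looks_like_joint_venture)]
```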
Contract values were analysed using the cleaned value field in the dataset and summarised in crore BDT. Records with missing or non-numeric value entries were excluded from value totals but could still contribute to simple counts where appropriate. All year-by-year totals, firm totals, and ministry totals in the narrative and charts are computed from the same cleaned dataset so that numbers remain internally consistent across the page.
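In code terms this amounts to coercing the value field to numbers, dropping rows that fail that conversion from the sums, and rescaling to crore (1 crore = 10,000,000 BDT). A minimal sketch, assuming hypothetical award_date and contract_value columns:

```python
import pandas as pd

CRORE_BDT = 1e7  # 1 crore = 10,000,000 taka

def yearly_totals_in_crore(awards: pd.DataFrame) -> pd.Series:
    """Sum cleaned award values per year, expressed in crore BDT.

    Rows with missing or non-numeric values are dropped from the sums
    (they can still be counted elsewhere); column names are illustrative.
    """
    df = awards.copy()
    df["award_date"] = pd.to_datetime(df["award_date"], errors="coerce")
    df["value_bdt"] = pd.to_numeric(df["contract_value"], errors="coerce")
    df = df.dropna(subset=["award_date", "value_bdt"])
    return df.groupby(df["award_date"].dt.year)["value_bdt"].sum() / CRORE_BDT
```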
Because the story relies on administrative award data, it cannot directly measure the competitiveness of bidding (for example, how many firms submitted bids, how evaluation scores were assigned, or whether specifications were restrictive) unless those fields are available and linked.
To measure the “awarded in a day” pattern, we grouped records by award date (as recorded in the dataset). For each date, we calculated the number of contracts awarded, the number of distinct firms receiving awards, and the share of awards going to repeat winners. “Burst days” refer to dates when unusually large numbers of awards occur together; these are reported as descriptive patterns and are not, by themselves, evidence of wrongdoing.
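A sketch of that daily profile follows, assuming an awards table with hypothetical award_date and supplier_key columns (the canonical key from the name cleaning above). The burst threshold shown, the top 1% of daily award counts, is an illustrative choice rather than the exact cut-off used.

```python
import pandas as pd

def daily_award_profile(awards: pd.DataFrame, burst_quantile: float = 0.99) -> pd.DataFrame:
    """Per-date award counts, distinct winners, repeat-winner share, and a burst flag."""
    df = awards.copy()
    df["award_date"] = pd.to_datetime(df["award_date"], errors="coerce")
    df = df.dropna(subset=["award_date", "supplier_key"])

    # Firms that win more than once anywhere in the dataset
    wins_per_firm = df["supplier_key"].value_counts()
    repeat_firms = set(wins_per_firm[wins_per_firm > 1].index)
    df["is_repeat_winner"] = df["supplier_key"].isin(repeat_firms)

    daily = df.groupby(df["award_date"].dt.date).agg(
        n_awards=("supplier_key", "size"),
        n_distinct_firms=("supplier_key", "nunique"),
        repeat_winner_share=("is_repeat_winner", "mean"),
    )
    # Flag dates with unusually many awards ("burst days")
    daily["is_burst_day"] = daily["n_awards"] >= daily["n_awards"].quantile(burst_quantile)
    return daily
```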
“Repeat winners” in this story refers to firms that appear as winners more than once in the dataset (based on the canonical firm key). We used this definition to compute (a) how many repeat-winner firms exist overall, (b) how frequently awards go to repeat winners on a given day, and (c) which firms and ministries dominate rapid-award dates. Because the dataset is award-level, this analysis cannot directly measure bidder competitiveness, the number of bids received, or whether an award process complied with all procedural requirements.
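Point (b) is already captured by the repeat_winner_share column in the daily profile above. A hedged sketch of how (a) and (c) could be tallied follows; the ministry column and the top-10 cut are illustrative assumptions.

```python
import pandas as pd

def repeat_winner_summary(awards: pd.DataFrame, daily: pd.DataFrame) -> dict:
    """Count repeat-winner firms and rank firms/ministries on burst days."""
    wins_per_firm = awards["supplier_key"].value_counts()
    burst_dates = set(daily.index[daily["is_burst_day"]])
    on_burst = awards[pd.to_datetime(awards["award_date"]).dt.date.isin(burst_dates)]
    return {
        "n_repeat_winner_firms": int((wins_per_firm > 1).sum()),
        "top_firms_on_burst_days": on_burst["supplier_key"].value_counts().head(10),
        "top_ministries_on_burst_days": on_burst["ministry"].value_counts().head(10),
    }
```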
This investigation does not claim that fast awards are automatically corrupt, illegal, or improper. Concentration and speed are treated as risk signals that can guide further reporting: identifying which tenders to inspect and which offices repeatedly approve awards on burst days, and checking whether documentation was published late or incompletely and whether the same suppliers appear across multiple rapid-award episodes.