Entry 0036
Ghost Capacity in Condiment Plants: How Hold-and-Release Cycles Destroy Throughput the Dashboard Never Measures
Truth: Modeled scenario

Opening Insight
In sauce, dressing, and condiment plants where quality holds exceed 3 percent of weekly batch volume, effective throughput per shift drops by 10 to 20 percent even when the filling line never stops. The loss does not appear in downtime logs. It appears in the gap between what the schedule promised and what the palletizer actually wrapped. When we model hold-and-release patterns across multi-SKU condiment operations, the dominant throughput constraint is not defect rate, equipment speed, or labor availability. It is the inventory shock created when held batches re-enter a schedule that has already moved on.
This is not a quality problem. It is a scheduling detonation disguised as a quality problem.
You think you are managing defect rate; you are actually managing the time-delay between a hold event and its disposition, because that delay is what converts a single batch anomaly into a system-wide throughput loss. The defect itself is often minor. A viscosity reading outside spec, an emulsion that broke partially during thermal processing, a pH drift on an incoming tomato paste lot. The damage is not the defect. The damage is what happens to every process upstream and downstream while the plant waits for someone to decide what to do with the held product.
System Context
A typical sauce and condiment operation runs a process chain that looks deceptively simple: batching (kettles or continuous blending), thermal processing (HTST or batch cook), filling, capping, labeling, case packing, palletizing. Between batching and filling sits a surge tank or hold tank that buffers flow. Between filling and shipping sits a warehouse that absorbs timing mismatches. The system appears linear.
It is not. The batching side operates in discrete lots, often 500 to 2,000 gallon kettles, each with its own batch record, lot trace, and quality release gate. The filling side operates continuously or semi-continuously, pulling from surge tanks at rates governed by filler head speed and container format. The mismatch between discrete batch release and continuous fill demand is managed by the surge tank buffer. When that buffer absorbs a hold event, it does not just lose one batch worth of product. It loses the scheduling slot that batch was supposed to occupy, and every downstream process that was synchronized to that slot.
Condiment plants typically run 8 to 25 SKUs across shared filling lines. Changeovers between SKUs with different viscosities, particulate loads, or allergen profiles require CIP cycles ranging from 20 to 90 minutes depending on the transition. The production schedule is built around minimizing these transitions. A hold event does not just remove a batch from the flow. It forces the schedule to either wait (consuming time) or skip ahead to the next SKU (consuming a changeover and CIP cycle that was not planned). Either path costs capacity.
The raw materials feeding these systems introduce their own variability. Incoming tomato paste, oil, vinegar, and spice blends arrive with lot-to-lot variation in pH, Brix, viscosity, and moisture content. When upstream raw material variability is high, the probability of a batch falling outside spec increases, which increases hold frequency, which amplifies every downstream disruption the schedule was designed to avoid. The system is a chain of dependencies where the weakest link is not any single piece of equipment. It is the interaction between batch-level quality decisions and a schedule that cannot absorb them.
Mechanism
The primary mechanism operates through a specific causal chain: a quality hold removes a batch from the production sequence, the surge tank drains while disposition is pending, downstream filling either starves or the schedule advances to a different SKU, the unplanned SKU change triggers a CIP cycle, and when the held batch is eventually released, it must be reinserted into a schedule that no longer has a slot for it. Hold-and-release cycles create WIP spikes that choke downstream filling and packaging because the released product arrives in a bolus, not a steady flow.
When we model this sequence, the time constants matter enormously. A simulation of a 12-SKU condiment line with two kettles and one filling line suggests the following dynamics. A single hold event on a 1,000-gallon batch, assuming a 4-hour disposition time, generates between 45 and 90 minutes of direct schedule disruption. But the indirect disruption, the CIP cycle triggered by the unplanned SKU switch, the rework of the released batch competing with first-pass production for filler time, and the WIP accumulation in the surge tank when the released batch finally flows, compounds to 2 to 4 hours of effective capacity loss.
The relationship between hold frequency and throughput loss is not linear. It inflects at roughly 4 to 5 percent hold rate by batch count per week, where the schedule loses the ability to recover between disruptions. Below that threshold, the surge tank buffer and schedule slack absorb the shock. Above it, each new hold event lands on a system that is still recovering from the previous one. The WIP spikes overlap. The CIP cycles stack. The filling line is running but not producing at its rated throughput because it is constantly transitioning between products it was not scheduled to run in that sequence.
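The inflection described above can be illustrated with a toy discrete model (all parameters here are assumptions for illustration, not the entry's calibrated simulation): a hold that lands while the schedule is still recovering from the previous one costs a multiple of the normal recovery time.

```python
import random

def weekly_loss_fraction(hold_rate, batches=60, recovery_hr=3.0,
                         week_hr=80.0, overlap_penalty=2.0, seed=7):
    """Toy model of schedule recovery under hold events.

    Each hold costs recovery_hr of effective capacity; a hold arriving
    before the schedule has recovered from the previous one costs
    overlap_penalty times more. All parameters are illustrative
    assumptions, not plant data.
    """
    rng = random.Random(seed)
    recovered_at = 0.0   # time at which the schedule finishes recovering
    loss = 0.0
    for b in range(batches):
        t = b * week_hr / batches                  # batch start time
        if rng.random() < hold_rate:
            still_recovering = t < recovered_at
            loss += recovery_hr * (overlap_penalty if still_recovering else 1.0)
            recovered_at = max(recovered_at, t) + recovery_hr
    return loss / week_hr                          # fraction of weekly capacity lost
```

With a fixed seed, sweeping `hold_rate` from 2 to 8 percent shows the loss fraction growing faster than linearly once holds begin to overlap, which is the mechanism behind the 4 to 5 percent inflection point.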
The physics of the hold itself are straightforward. A batch is flagged, typically at the batch record review or inline quality checkpoint. It moves to a hold tank or remains in the kettle, occupying that vessel. The kettle is now unavailable for the next scheduled batch. If the plant has two kettles, it has lost 50 percent of its batching capacity for the duration of the hold. If it has four, it has lost 25 percent. The constraint is not the defect rate. It is the vessel-occupation time multiplied by the disposition latency.
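The vessel-occupation arithmetic is simple enough to write down directly. A minimal sketch, where the 8-hour shift horizon is an assumption:

```python
def kettle_capacity_lost(n_kettles, disposition_hr, shift_hr=8.0):
    """One held batch occupies its kettle until disposition.

    Returns (fraction of batching capacity lost while the hold lasts,
    fraction of the shift's batching capacity lost overall).
    shift_hr is an assumed planning horizon, not a plant constant.
    """
    during_hold = 1.0 / n_kettles
    over_shift = during_hold * min(disposition_hr, shift_hr) / shift_hr
    return during_hold, over_shift

# Two kettles, 4-hour disposition: 50% of batching capacity is gone
# while the hold lasts, 25% of the shift overall.
```

The second return value is the one that scales with disposition latency, which is why cutting latency recovers capacity even when the hold count is unchanged.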
When we model disposition latency as a variable, the results are stark. Reducing average disposition time from 6 hours to 2 hours, with no change in hold frequency, recovers 5 to 8 percent of weekly throughput in a plant running 15 or more SKUs. The defect rate is identical in both scenarios. The system behavior is entirely different.
System Interaction
The primary mechanism couples with two secondary mechanisms that form a reinforcing causal chain, not independent problems.
First, disposition latency is often the real constraint, not the defect itself. When we model the decision pathway for a held batch, the clock starts at the quality flag and stops at the release or rework decision. In between sits a sequence of human decisions: QA review, lab retest, supervisor approval, sometimes customer notification or regulatory consultation. Each step has its own queue time. A simulation of disposition workflows across multi-shift condiment operations suggests that 60 to 80 percent of total hold duration is queue time between decision steps, not testing or evaluation time. The batch is not being analyzed. It is waiting for someone to look at it.
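The queue-time claim can be made concrete with a back-of-envelope decomposition. The step names and hours below are illustrative assumptions, not measured workflow data:

```python
# (queue_hr, work_hr) per disposition step -- illustrative values only.
steps = {
    "QA review":           (1.5, 0.25),
    "lab retest":          (2.0, 0.50),
    "supervisor sign-off": (1.0, 0.10),
}

queue_hr = sum(q for q, _ in steps.values())   # hours the batch waits
work_hr = sum(w for _, w in steps.values())    # hours of actual evaluation
queue_share = queue_hr / (queue_hr + work_hr)  # ~0.84 here: the batch mostly waits
```

Even with generous work-time estimates, queue share dominates, which is why attacking the handoffs between steps moves the total far more than speeding up the lab.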
This latency interacts with upstream raw material variability in a way that creates emergent instability. When an incoming tomato paste lot arrives with pH 0.2 units below the historical mean, the batching team adjusts the formula. But the adjustment may push viscosity or color outside the spec window for a different reason, triggering a hold on a batch that would have passed with the previous lot of paste. The hold-and-release cycles that create WIP spikes downstream are not random. They cluster around raw material lot transitions. When we model hold event timing against incoming lot change dates, the correlation is 0.4 to 0.6 in plants running commodity-grade ingredients with significant supplier variation.
Second, rework consumes the same line capacity as first-pass production. A held batch dispositioned for rework, whether blended back into a new batch at a reduced ratio or reprocessed through the kettle, does not use a separate line. It uses the same kettles, the same surge tanks, the same filler. Every hour of rework is an hour of first-pass production that did not happen. In a plant running at 85 percent or higher utilization, rework does not fit in the margins. It displaces scheduled production, which pushes that production into overtime or the next shift, which compresses the schedule further, which reduces the system's ability to absorb the next hold event.
This is a cumulative exposure problem: each hold event degrades the system's resilience to the next one, and the degradation is invisible until the schedule collapses.
Economic Consequence
The economic damage from hold-and-release disruption operates through three channels simultaneously, which is why conventional cost accounting misses the total impact.
The first channel is lost throughput value. When a filling line rated at 200 cases per minute runs effectively at 160 to 170 cases per minute because of WIP-induced starvation and unplanned CIP cycles, the lost 30 to 40 cases per minute represent revenue that was scheduled but never produced. In a condiment plant running $8 to $15 wholesale per case, a simulation suggests this translates to $3,000 to $6,000 per shift in unrealized throughput value. Across a 5-day, 2-shift operation, that is $30,000 to $60,000 per week in Ghost Capacity, output the schedule assumed but the system could not deliver.
The second channel is energy per unit. Every unplanned CIP cycle consumes hot water, caustic, rinse water, and the energy to heat and pump all of it. When we model CIP frequency against hold rate, a plant experiencing 5 percent batch hold rates runs 15 to 25 percent more CIP cycles than the same plant at 2 percent hold rates. The energy cost per unit of saleable product rises not because the process is less efficient, but because the system is cleaning more often to accommodate schedule disruptions it did not plan for.
The third channel is inventory carrying cost. Held product sitting in tanks or totes is working capital that is not moving. A 1,000-gallon batch of premium dressing held for 8 hours represents $4,000 to $12,000 in raw material value that is neither saleable nor dispositioned. Multiply by 3 to 5 hold events per week and the plant is carrying $12,000 to $60,000 in perpetually uncertain inventory. This inventory does not appear on the aged-inventory report because it has not aged. It is simply stuck.
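The three channels can be combined in one sketch. Every input below is a placeholder assumption; note that the throughput term uses contribution margin per case rather than wholesale price, since lost production avoids some variable cost:

```python
def weekly_ghost_cost(lost_cases_per_shift, margin_per_case,
                      shifts_per_week, extra_cip_cycles, cip_cost_each,
                      avg_held_value, weekly_carrying_rate):
    """Sum the three cost channels described above. All inputs are
    illustrative assumptions, not the entry's modeled values."""
    throughput = lost_cases_per_shift * margin_per_case * shifts_per_week
    cip = extra_cip_cycles * cip_cost_each
    carrying = avg_held_value * weekly_carrying_rate
    return throughput + cip + carrying

# Example: 1,500 lost cases/shift at $2 margin, 10 shifts/week,
# 6 extra CIP cycles at $400 each, $30,000 of held inventory
# at a 0.5% weekly carrying rate.
cost = weekly_ghost_cost(1500, 2.0, 10, 6, 400, 30000, 0.005)
```

The point of writing it down is the relative magnitudes: the throughput term dominates, which is why the CIP and carrying costs are so easy to miss when they are the only ones being measured.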
The compounding effect is what matters. These three channels do not add linearly. They multiply through schedule fragmentation. The lost throughput forces overtime to recover volume. The overtime shifts run with fatigued crews that generate more quality deviations. The deviations generate more holds. The system is running. It is not producing.
Diagnostic
The signature of hold-and-release throughput loss is a specific pattern that looks nothing like a quality problem on a standard dashboard. If your OEE is above 80 percent but your actual cases per shift are trending down over 4 to 8 weeks, and your unplanned CIP cycles are increasing while your changeover count is stable or declining, you are not looking at an equipment degradation problem or a labor efficiency problem. You are looking at hold-and-release cycles that create WIP spikes that choke downstream operations without ever registering as downtime.
The second diagnostic signature is a mismatch between your hold rate and your rework rate. If holds are running at 4 to 6 percent of batches but rework is only 1 to 2 percent, the gap represents batches that were eventually released as-is after consuming hours of disposition latency. The product was fine. The system paid the full disruption cost anyway.
The third signature is temporal clustering. If you plot hold events on a timeline and they cluster around raw material lot transitions rather than distributing randomly across the week, your hold problem is an incoming variability problem wearing a quality mask. The intervention is not better QA staffing. It is tighter incoming material specifications or pre-blending strategies that dampen lot-to-lot variation before it reaches the kettle.
Decision Output:
- Decision type: Sequence or build. Should the plant invest in additional surge capacity (tanks, holding vessels) or restructure the disposition workflow and incoming material strategy?
- Trigger: Hold rate exceeding 4 percent of weekly batch count, combined with CIP frequency exceeding changeover frequency by more than 20 percent.
- Action: Model disposition latency reduction first. Simulate the throughput recovery from cutting average disposition time by 50 percent before approving capital for additional buffer tanks.
- Tradeoff: Faster disposition requires QA authority delegation and pre-approved rework protocols, which increases the risk of releasing marginal product. The plant trades quality conservatism for schedule stability.
- Evidence: Compare weekly throughput per shift against hold event count and average disposition hours. If the correlation is stronger with disposition hours than with hold count, latency is the binding variable.
Framework Connection
This mechanism is a reliability problem that masquerades as a quality problem and ultimately manifests as a capacity problem. It maps directly to the reliability pillar: the variance introduced by hold-and-release cycles determines whether the plant can commit to a production schedule and, by extension, to customer delivery dates and revenue forecasts.
The constraint analysis method reveals that the binding constraint is not the defect that triggers the hold, nor the filling line that starves, but the disposition decision queue that sits between them. This is a constraint that does not appear on any equipment list or capacity model. It is an organizational process with its own throughput rate, and that rate governs the effective capacity of every physical asset downstream.
The counterfactual experimentation method is what makes this visible. When we model the same plant with identical hold rates but different disposition latencies, the throughput difference is 8 to 15 percent. No physical asset changed. No capital was deployed. The system's capacity shifted because an information flow changed speed. This is the Simulation Gap in practice: the difference between what a spreadsheet capacity model predicts (based on equipment rates) and what a dynamic simulation reveals (based on system interactions, buffers, and decision latencies). Ghost Capacity lives in that gap. It is capacity that exists on paper, that the equipment can physically deliver, but that the system's interaction patterns prevent from being realized.
Strategic Perspective
Most capital requests for additional hold tanks or buffer vessels in condiment plants are attempts to solve a decision-latency problem with steel. The WIP spikes that choke downstream operations are not caused by insufficient tankage. They are caused by batches that sit in existing tanks too long because the disposition workflow has no throughput target.
The decision-distortion chain is clear: hold events create throughput loss. The loss is not measured as hold-related because OEE attributes it to minor stops, speed loss, or changeover time. Leadership sees declining throughput and approves capital for a faster filler or an additional surge tank. The new asset arrives. The hold-and-release pattern continues. Throughput improves marginally because the buffer is larger, but the underlying instability remains. The next capital cycle, the same request returns with a bigger number.
An executive could repeat this in a capital review: "We are not short on capacity. We are slow on decisions. Every hour a batch sits in hold, we lose three hours of downstream schedule integrity."
The forward-looking implication is that plants which treat disposition latency as an operational metric, with the same rigor they apply to filler speed or CIP turnaround, will recover capacity that their competitors are still trying to buy. The capacity already exists. It is trapped behind a queue that nobody is measuring.
Related Entries
- Entry 0043: Changeover Frequency and the Thermal Exposure Cascade in Frozen Food Packaging Systems
- Entry 0039: Quality Holds Are Not a Quality Problem: How Disposition Latency Consumes Bakery Capacity
- Entry 0034: The First-Hour Problem: How Shift Handoff Information Loss Traps Throughput in Frozen Food Operations