Introduction
Coagulation testing is essential for the diagnosis and management of a variety of conditions, including hemophilia, thrombophilia, and complicated coagulopathies.1 The most commonly performed tests in a coagulation laboratory are prothrombin time (PT) with international normalized ratio (INR), partial thromboplastin time (PTT), fibrinogen, and thrombin time (TT). These tests are sometimes referred to as routine coagulation tests or screening tests. In clinical practice, they are used prior to surgery, as follow-up investigations for patients being evaluated for a bleeding diathesis, or to monitor anticoagulant therapy. When the results are abnormal, that is, outside the defined normal range, a follow-up assessment with more complex tests is often warranted. A mixing study is often the initial reflexive test, and it can provide valuable information to conclude the assessment or help guide further investigations (Fig. 1). As the term suggests, a mixing study involves mixing the patient’s plasma with normal pooled plasma (NPP) before performing the same assay.2 The rationale is straightforward: when mixing “corrects” the test result, a factor deficiency is suspected; otherwise, the presence of an inhibitor is more likely. In theory, a mixing study can be performed with any functional/activity assay to distinguish a factor deficiency from an inhibitor. In practice, however, the most common mixing studies are based on PT and PTT.
Mixing study procedure
The standard PT and PTT mixing study is performed on a 1:1 mixture of the patient’s plasma and NPP. According to the Clinical and Laboratory Standards Institute (CLSI) guidelines,3 the NPP should be prepared from a pool of at least 20 donors, contain >80% activity of all coagulation factors, be fresh frozen or lyophilized, and have a platelet count of <10×109/L. Whether PT or PTT is tested, the assay is normally repeated immediately after mixing, that is, without “incubation”. Some laboratories also repeat the coagulation test after incubating the mixed sample at 37°C for one or two hours, in order to increase the detection sensitivity for specific factor inhibitors, such as factor VIII (FVIII) inhibitors, and for time/temperature-dependent lupus anticoagulants (LAs). Mixing ratios of patient plasma to NPP other than 1:1 have also been used,4 but this is not common practice.
Interpretation of mixing study results
As mentioned above, correction of the coagulation test after mixing indicates a factor deficiency, while no correction or partial correction raises the possibility of a coagulation factor inhibitor. However, defining “correction” remains difficult and controversial. Several methods are available to determine whether a result is corrected.
One method is to use laboratory-defined normal ranges.5 If the coagulation test result is within the normal range after mixing, it is deemed “corrected”. This method works well for samples with a single factor deficiency, because the 50% of the deficient factor provided by the NPP is sufficient to generate a normal test result. However, the present and previous studies have shown that multiple factors at levels close to 50 U/dL can result in prolonged clotting times.6 Multiple factor deficiency can result from vitamin K deficiency, liver disease, disseminated intravascular coagulation (DIC), or dilutional coagulopathy in a trauma patient after fluid resuscitation; this deficiency pattern is far more common than any single factor deficiency in the adult population in whom mixing studies are performed. As a result, when this drawback is not recognized by the clinical team, a falsely called “non-corrected” result in a patient with vitamin K deficiency can lead to the misdiagnosis of a coagulation inhibitor and/or a missed or delayed opportunity for vitamin K treatment. Another drawback of this method is that a slightly prolonged clotting time due to a weak inhibitor can be falsely “corrected” because the inhibitor is diluted by mixing.7
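As a rough illustration of this method, the following sketch classifies a 1:1 mix result against a laboratory-defined normal range; the PTT reference interval used here is a hypothetical placeholder, since each laboratory must apply its own validated range:

```python
# Minimal sketch of the normal-range interpretation method.
# The normal range below (25-35 s for PTT) is illustrative only;
# each laboratory must use its own validated reference interval.

PTT_NORMAL_RANGE = (25.0, 35.0)  # seconds; hypothetical lab-defined range

def corrected_by_normal_range(mix_ct: float,
                              normal_range: tuple[float, float] = PTT_NORMAL_RANGE) -> bool:
    """Return True if the 1:1 mix clotting time falls within the lab's
    normal range, i.e., the mixing study is called "corrected"."""
    low, high = normal_range
    return low <= mix_ct <= high

# Example: a single-factor deficiency mixed 1:1 with NPP yields ~50 U/dL
# of the missing factor, typically enough to normalize the clotting time.
print(corrected_by_normal_range(31.2))  # True -> suggests factor deficiency
print(corrected_by_normal_range(38.5))  # False -> raises suspicion of an inhibitor
```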
Two other common methods are the percent correction method4 and the index of circulating anticoagulant (ICA), or “Rosner index”,8 which are calculated using the following formulas (CT = clotting time):
% Correction = [(patient CT − 1:1 mix CT) / (patient CT − NPP CT)] × 100;
Rosner index = [(1:1 mix CT − NPP CT) / patient CT] × 100.
Each laboratory needs to establish a cut-off value to distinguish factor deficiency from a coagulation inhibitor. A result above the cut-off value for percent correction, or below the cut-off value for the Rosner index, is considered a correction; otherwise, it is considered a non-correction. Compared to the normal range method, these methods can theoretically reduce false “corrections”, because they compare the clotting time of the mixed plasma to that of normal plasma. However, as explained above, the effect of mixing on clotting time correction differs between single factor deficiency and multiple factor deficiency. Hence, separate cut-off values need to be established to reliably interpret the mixing study result when using these two methods.
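A minimal worked example of the two calculations is sketched below; the cut-off values (>70% correction, Rosner index <11) are hypothetical placeholders rather than validated thresholds, and the example deliberately shows that the two methods can disagree on the same sample:

```python
# Minimal sketch of the percent correction and Rosner index calculations.
# The cut-off values below are illustrative placeholders; each laboratory
# must establish and validate its own cut-offs.

def percent_correction(patient_ct: float, mix_ct: float, npp_ct: float) -> float:
    """% Correction = (patient CT - 1:1 mix CT) / (patient CT - NPP CT) x 100."""
    return (patient_ct - mix_ct) / (patient_ct - npp_ct) * 100.0

def rosner_index(patient_ct: float, mix_ct: float, npp_ct: float) -> float:
    """Rosner index = (1:1 mix CT - NPP CT) / patient CT x 100."""
    return (mix_ct - npp_ct) / patient_ct * 100.0

# Worked example: patient PTT 60 s, 1:1 mix 38 s, NPP 30 s.
pc = percent_correction(60.0, 38.0, 30.0)   # ~73.3
ri = rosner_index(60.0, 38.0, 30.0)         # ~13.3

# Hypothetical cut-offs for illustration: >70% correction, or Rosner index
# <11, would be read as "corrected".
print(f"% correction: {pc:.1f} -> {'corrected' if pc > 70 else 'not corrected'}")
print(f"Rosner index: {ri:.1f} -> {'corrected' if ri < 11 else 'not corrected'}")
```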
It has been established that, when the sample lacks any inhibitor, the clotting time of a particular coagulation assay is strongly associated with the factor levels.9 This provides a mathematical basis for estimating clotting factor levels from given clotting times. In the laboratory of the investigators, the estimated factor correction (EFC) method is used to interpret mixing results.6 First, a factor-specific regression curve is determined using factor-deficient plasma, and a regression formula is generated. Then, using this formula, the factor level is estimated from the original clotting time. Next, the factor level in the mixture is readily calculated from the fixed 1:1 mix ratio. Using the same regression curve, the clotting time of the mixture can then be reliably predicted from its estimated factor level. If the actual clotting time of the mixture is less than the predicted value, this is considered a correction. A critical step in this approach is that the factor deficiency pattern needs to be determined in advance, based on the patient’s clinical history and/or the pattern of abnormal coagulation tests. For example, an isolated PT prolongation suggests factor VII deficiency, which is most likely due to vitamin K deficiency, since hereditary or acquired isolated factor VII deficiency is extremely rare. The major drawback of this method is the need for a series of experiments to establish the regression equations for different types of factor deficiencies.
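The sketch below illustrates the logic of the EFC approach under the simplifying assumption that clotting time varies linearly with the logarithm of factor activity; the calibration points and the NPP activity of 100 U/dL are hypothetical, and the authors’ actual regression formulas are factor-specific and laboratory-derived:

```python
# Minimal sketch of the estimated factor correction (EFC) logic, assuming
# a log-linear relationship between clotting time and factor activity
# (CT ~ a + b*log10(activity)). All calibration values are hypothetical.

import numpy as np

# Hypothetical calibration data from serial dilutions of factor-deficient
# plasma: factor activity (U/dL) vs. observed PTT (seconds).
activity = np.array([1, 5, 10, 25, 50, 100], dtype=float)
ct = np.array([95, 68, 58, 45, 38, 32], dtype=float)

# Fit CT = a + b * log10(activity).
b, a = np.polyfit(np.log10(activity), ct, 1)

def activity_from_ct(observed_ct: float) -> float:
    """Invert the regression to estimate factor activity from clotting time."""
    return 10 ** ((observed_ct - a) / b)

def predicted_mix_ct(patient_ct: float, npp_activity: float = 100.0) -> float:
    """Predict the 1:1 mix CT: estimate the patient's factor level, average
    it with the NPP level (fixed 1:1 ratio), and map back to a clotting time."""
    mix_activity = (activity_from_ct(patient_ct) + npp_activity) / 2.0
    return a + b * np.log10(mix_activity)

# Example: patient PTT 70 s. If the measured mix CT is at or below the
# predicted value, the result is read as a correction (factor deficiency).
expected = predicted_mix_ct(70.0)
measured = 39.0
print(f"predicted mix CT: {expected:.1f} s; measured: {measured} s")
print("corrected" if measured <= expected else "not corrected")
```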
A common pitfall is interpreting a mixing study result as reflecting either a factor deficiency or the presence of an inhibitor, but not both. It is quite common for the same patient to have both. For example, it is known that the positive rate of lupus anticoagulants can reach >50% in intensive care unit patients.10 These patients are also at high risk of vitamin K deficiency11 due to malnourishment, antibiotic use, or comorbid liver dysfunction. Therefore, although a complete correction can be interpreted as factor deficiency, a partial or incomplete correction does not rule out the coexistence of a factor deficiency and a coagulation inhibitor. It is also important to review both the PT and PTT results together with the mixing study results.
Incubated mixing study
In some coagulation laboratories, the PTT mixing study is performed both immediately after the patient’s plasma is mixed with the NPP and after an incubation period of 1–2 hours at 37°C. Inhibitors against specific clotting factors (most commonly FVIII) become more evident after incubation.12 Reflexive factor activity assays and inhibitor measurement, known as the Bethesda assay, are required to confirm the presence of a factor-specific inhibitor. Some laboratories do not perform an incubated mixing study; instead, factor activity is measured based on the clinical suspicion of a specific factor inhibitor. However, further prolongation of the clotting time in an incubated mixed sample is not unique to specific factor inhibitors: approximately 15% of LAs are time/temperature-dependent,13 likely due to the pH change during incubation.14
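For orientation, the arithmetic behind the Bethesda assay mentioned above can be sketched as follows; one Bethesda unit (BU) corresponds to an inhibitor that neutralizes 50% of the factor activity in the incubated mix, and the 25–75% residual activity window used here is a commonly cited reliability range rather than a universal rule:

```python
# Minimal sketch of the Bethesda unit (BU) calculation. One BU neutralizes
# 50% of factor activity in the incubated 1:1 mix; each further halving of
# residual activity adds one BU, so BU = log2(100 / residual activity %).
# The 25-75% acceptance window is an assumption for illustration.

import math

def bethesda_units(residual_activity_pct: float, dilution: float = 1.0) -> float:
    """Convert residual factor activity (%) in the incubated 1:1 mix to
    Bethesda units, scaled by the patient plasma dilution tested."""
    if not 25.0 <= residual_activity_pct <= 75.0:
        raise ValueError("residual activity outside the reliable 25-75% window; "
                         "retest at a different dilution")
    return math.log2(100.0 / residual_activity_pct) * dilution

# Example: 35% residual activity measured at a 1:4 dilution -> ~6.1 BU.
print(f"{bethesda_units(35.0, dilution=4.0):.1f} BU")
```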
Other challenges in the interpretation of mixing studies
In clinical practice, a strong LA that prolongs the clotting time after incubation can present a technical challenge: the measured factor levels can be falsely low, and the Bethesda assay can reveal an apparently strong inhibitor. Furthermore, when the abnormality is due to a strong LA, more than one factor usually shows decreased activity, and an “inhibitor” pattern is often observed; that is, the measured factor activity increases with increasing dilution, because the inhibitor becomes weaker with each dilution. To confirm this, a chromogenic factor assay is normally required, because it is not affected by the LA. Another challenge in the interpretation of mixing study results is the increasing use of direct-acting oral anticoagulants (DOACs). These agents inhibit common pathway factors, namely thrombin or factor Xa. Furthermore, their inhibitory pattern on PT and PTT can vary in individual patients.15–17 Therefore, a careful review of the patient’s medication list is highly recommended when interpreting mixing study results. The effects of anticoagulants can be assessed by special tests, such as anti-Xa chromogenic assays calibrated with rivaroxaban or apixaban18 (Fig. 1). It is noteworthy, however, that evidence of DOAC use does not exclude an underlying factor deficiency and/or coagulation inhibitor. Coagulation studies need to be repeated when the patient is off anticoagulants.
Mixing studies in coagulation tests other than PT and PTT
In addition to the well-defined use of mixing studies for PT and PTT, mixing studies are also used in other coagulation assays. Both the CLSI and International Society on Thrombosis and Haemostasis (ISTH) recommend a mixing study step in the guidelines for LA testing.3,19 However, there is no consensus on when, how, or whether to perform the mixing study in LA assays.20,21
Conclusions
A mixing study is a useful strategy to help distinguish the etiology of a coagulopathy. It can be performed in a manner similar to routine coagulation assays, with only minimal additional manual steps. However, caution should be taken in interpreting mixing study results, since each approach has its own advantages and limitations. A patient’s clinical history, especially the bleeding and thrombosis history, is critical in the interpretation of initially abnormal coagulation results. Furthermore, the medication history, particularly DOAC use, needs to be reviewed in complicated cases. More advanced and specialized assays are often required to unravel particularly complex cases.
Abbreviations
- PT:
prothrombin time
- INR:
international normalized ratio
- PTT:
partial thromboplastin time
- TT:
thrombin time
- NPP:
normal pooled plasma
- CLSI:
Clinical and Laboratory Standards Institute
- FVIII:
factor VIII
- LA:
lupus anticoagulant
- DIC:
disseminated intravascular coagulation
- CT:
clotting time
- VK:
vitamin K
- VKD:
vitamin K dependent
- DOAC:
direct-acting oral anticoagulant
- ISTH:
International Society on Thrombosis and Haemostasis
Declarations
Acknowledgement
This manuscript was presented during the 7th Chinese American Pathologists Association Diagnostic Course on September 12, 2021, via virtual format.
Funding
None.
Conflict of interest
The authors have no conflicts of interest to disclose.
Authors’ contributions
JC planned and wrote the manuscript, and ML revised the manuscript. All authors edited and approved the final manuscript.