INTRODUCTION
The introduction of newer immunosuppressive agents, combined with more widespread use of induction therapy for high-risk patients, has resulted in a substantial reduction of early acute rejections and improved one-year graft survival; however, these short-term achievements have not been matched by similar gains in the long-term outcomes of renal allografts[1-3]. With more potent immunosuppression, complications of therapy prompted a paradigm shift among many clinicians, who moved away from further intensification of immunosuppression and refocused attention on preventing the adverse effects of immunomodulating therapy, such as viral infections, malignancy and inherent renal toxicity[4]. This seemed to usher in a new era in immunosuppression for renal transplantation: one in which immunosuppressive therapy was strong enough to permit consideration of reducing or eliminating individual immunosuppressive agents associated with long-term toxicities. Thus, the concept of minimization was born. However, minimization seems to have created yet more controversy: the potential for more rejections with steroid minimization[5,6], increased donor-specific antibody (DSA) development after calcineurin inhibitor withdrawal[7], and increased graft loss and mortality with mechanistic (mammalian) target of rapamycin (mTOR) inhibitor-based or calcineurin inhibitor (CNI)-free regimens[8,9]. How could we benefit from the fashionable concept of personalization in the field of immunosuppression after renal transplantation? Perhaps reading the small print of studies attempting minimization and combining that information with everyday clinical experience can help us tailor immunosuppressive drug combinations individually. Specifically, while awaiting newer, more potent agents with less toxicity, assessing an individual patient's immunological and metabolic risk profile, implementing appropriate post-transplant screening and remaining attentive to adverse events may help us take advantage of what we already have and arrive at the most suitable combination for an individual patient.
ATTEMPTS AT MINIMIZATION: GLUCOCORTICOIDS
The metabolic, bone and cardiovascular side effects of glucocorticoid hormones, commonly referred to as "steroids", made them a logical target for drug minimization[10]. Given the ever-increasing proportion of incident end-stage kidney disease attributable to diabetic nephropathy, glucocorticoid minimization or avoidance has maintained steady popularity in the transplant literature[11-14]. Among the more recent studies comparing "steroid-free" regimens to a triple combination of immunosuppressive agents containing glucocorticoids, the FREEDOM trial[5] showed more early acute rejections but non-inferior patient and graft survival in the steroid-free groups. Metabolic side effects known to be associated with glucocorticoid hormones were also reduced. However, this trial excluded patients with presumed higher immunological risk, including those receiving allografts from marginal donors or with longer cold ischemia times, recipients with higher panel-reactive antibody titers, and re-transplants. Similar results were obtained in the tacrolimus-based, steroid-free regimens in renal transplantation (ATLAS) trial[6], which showed higher acute rejection rates that did not translate into inferior outcomes, along with a trend towards a better cardiovascular risk profile in the recipients. Furthermore, in the ATLAS trial (a multi-center study of European patients) subjects were at low risk for immunological complications. A retrospective study conducted in the United States on re-transplant patients receiving rabbit-derived anti-thymocyte globulin (rATG) induction therapy[15] showed relatively low rates of acute rejection in both the steroid-withdrawal and triple-therapy groups. While these and other studies tend to show non-inferiority of steroid-free maintenance regimens in low-risk patients, and perhaps hint that early withdrawal may be safe in higher-risk patients receiving induction therapy, it remains unclear whether the reductions in metabolic complications, including new-onset diabetes[16], and in skeletal complications, including fracture risk[17], sufficiently counterbalance the risk of long-term immunological complications in these patients. How would tailoring help then? Perhaps the issue of glucocorticoid withdrawal can serve as the most obvious example of personalized immunosuppression. Patients with low immunological risk, or those at higher immunological risk who are also at risk for metabolic complications, could be candidates for glucocorticoid withdrawal, coupled with induction therapy and more intensive screening for acute or subclinical rejection, considering the negative impact of acute rejections[18] and the increased rates of DSA[19] in this setting. On the other hand, the possibility of an increased risk of antibody-mediated rejection after steroid withdrawal in high-risk populations has not been sufficiently explored. This incomplete state of understanding underscores the importance of close long-term follow-up with increased screening efforts for such patients.
CNI MINIMIZATION: THE CASE FOR AND AGAINST
Since their introduction into maintenance immunosuppression for renal transplant recipients, CNIs have greatly contributed to the reduced incidence of acute rejection and improved immediate graft survival[20]. In combination with mycophenolate mofetil and low-dose glucocorticoids, they remain the most popular choice for de novo patients in transplant programs throughout North America[21]. However, CNIs are known to have a narrow therapeutic index, require close monitoring of blood levels and are associated with cumulative renal toxicity. Long-term administration of CNI agents may result in renal impairment in both renal[22] and non-renal organ transplant recipients[23], which has led to some disenchantment with CNIs in the transplant community[4]. Against the background of such functional decline, a distinct histological pattern has been identified, with striped interstitial fibrosis and arteriolar hyalinosis[24], although the specificity of this entity has recently been challenged[25]. The observation that most survival benefits from newer drug combinations, including CNIs, are manifested in the first year after transplantation has led many to conclude that there may be a dual pattern of graft loss etiology over the post-transplant course[26]. According to this view, immunological mechanisms may play a prominent role early on, manifesting as subclinical rejection on protocol biopsies; later, the cumulative toxicity of CNIs may become progressively more significant. This model has led to the development of a dual strategy involving higher-intensity immunosuppression initially, with relative tapering of immunosuppressive drug dosages later on, specifically targeting lower doses and target levels of CNIs during the late transplant course. An alternative strategy would be the complete elimination of CNI drugs, with or without the introduction of alternative agent(s). An early study from Australia showed that in patients with low-to-moderate immunological risk, CNIs could be withdrawn within the first year after transplantation with favorable long-term results using graft loss as the primary endpoint[27]. Early studies involving mTOR inhibitors also seemed to show promising results, as discussed in the section below. However, this strategy has recently been challenged by newer studies taking advantage of recent developments in the diagnostic armamentarium for antibody-mediated rejection. Renal allograft biopsies taken "for cause" in North American transplant centers[28] showed that humoral rejection may be the single most important etiology behind declining graft function. In this particular series, calcineurin toxicity seemed much less prominent than previously reported. The same study drew attention to the significance of non-adherence to immunosuppressive regimens, possibly enhancing the role played by immunological mechanisms in these patients. Under such circumstances, inadequate immunosuppression due to non-adherence may substantially contribute to graft loss. In the opinion of the authors of this paper, this is a crucial point that is not emphasized enough in the daily practice of transplant medicine.
The diagnostic accuracy of CNI toxicity[25], and the very notion that progressive decline in graft function is associated with chronic calcineurin toxicity, has also been called into question by some[29], who argue that in the absence of DSA and complement split product C4d staining, the histological diagnosis of "calcineurin inhibitor toxicity" carries a relatively good prognosis. Understanding the relative importance of these contributing mechanisms is not at all trivial. If CNI toxicity is relatively common even at the dosages currently in use, then CNI minimization is a valid strategy for preserving functional renal parenchyma and maintaining the longevity of grafts. If, on the other hand, antibody-mediated mechanisms play a more prominent role in patients with higher immunological risk, CNI minimization may be counter-productive, lowering anti-rejection defenses at the very time they are most needed. This state of affairs clearly points to the importance of developing screening tools to identify patients at higher risk of antibody-mediated rejection. Such tools would allow tailoring in lieu of minimization: patients more at risk for antibody-mediated immune mechanisms would be maintained on relatively higher doses of CNIs, with or without low-dose glucocorticoid hormones, while those at low risk might be more suitable candidates for calcineurin minimization or withdrawal. Do we have these screening tools in 2015? If so, how should we use them?
INDIVIDUALIZATION: RISK PROFILE AND SCREENING TOOLS
It is well recognized that a number of donor- and recipient-related factors, as well as factors associated with preservation injury, may influence the risk of graft loss after renal transplantation. In fact, a scoring system predicting graft loss has been developed on this basis[30]. It is logical to assume that patients at higher risk of graft loss may need more potent immunosuppression in the early post-transplant period, with induction therapy and a CNI-based triple combination. Given the cumulative toxicity associated with such therapies, including viral infections [cytomegalovirus (CMV), polyomavirus BK and Epstein-Barr virus], malignancy and renal toxicity, calcineurin minimization or withdrawal, with or without replacement of CNIs by alternative agents, has been attempted both early and late after transplantation[27,31-36]. These studies showed divergent results: some demonstrated benefit with better renal function after CNI minimization[27,31,33-35], while others failed to show such favorable outcomes[34,36]. Overall, the main factors predicting a favorable outcome are well-preserved initial renal function (glomerular filtration rate > 40 mL/min per 1.73 m2), lower levels of proteinuria (< 1 g/d), absence of previous acute or subclinical rejection, and no subsequent appearance of donor-specific anti-human leukocyte antigen antibodies[36,37]. A recent report on 5-year outcomes of patients converted to everolimus four and a half months after transplantation under the auspices of the ZEUS trial[38] confirms the safety and tolerability of such an approach, with a low mortality rate (< 3%), a fairly high proportion of patients remaining on the mTOR inhibitor after 5 years (62.6%) and an adverse event rate not significantly different from the control arm (i.e., patients remaining on cyclosporine). An increased incidence of mild acute rejections did not seem to translate into worse function or graft loss; on the contrary, estimated glomerular filtration rate (eGFR) remained higher in the everolimus group (66.2 mL/min per 1.73 m2 with everolimus vs 60.9 mL/min per 1.73 m2 with cyclosporine-A; a mean difference of 5.3 mL/min per 1.73 m2 in favor of everolimus in the intent-to-treat population). While these results are encouraging, suggesting that mTOR inhibitors may represent a viable alternative to CNIs in certain low-risk patients, concerns about increased de novo DSA production and proteinuria remain, particularly when an mTOR-based regimen is compared with the slightly more contemporary tacrolimus-based regimens.
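To illustrate how the predictors of a favorable outcome listed above might be combined in a simple screening step, the following minimal Python sketch encodes them as a single rule. The thresholds are taken from the text, but the data structure and function names are hypothetical illustrations of ours; this is a sketch of the reasoning, not a validated clinical decision tool.

```python
from dataclasses import dataclass

@dataclass
class Recipient:
    """Hypothetical minimal record of the predictors discussed above."""
    egfr: float                   # estimated GFR, mL/min per 1.73 m2
    proteinuria_g_per_day: float  # urinary protein excretion, g/d
    prior_rejection: bool         # any previous acute or subclinical rejection
    dsa_positive: bool            # donor-specific anti-HLA antibodies detected

def favorable_for_cni_minimization(r: Recipient) -> bool:
    """Encodes the cited predictors of a favorable outcome: GFR > 40 mL/min
    per 1.73 m2, proteinuria < 1 g/d, no previous acute or subclinical
    rejection and no DSA. Illustrative only, not a validated tool."""
    return (r.egfr > 40.0
            and r.proteinuria_g_per_day < 1.0
            and not r.prior_rejection
            and not r.dsa_positive)

# Example: preserved function, minimal proteinuria, no rejection history, no DSA
print(favorable_for_cni_minimization(
    Recipient(egfr=55.0, proteinuria_g_per_day=0.3,
              prior_rejection=False, dsa_positive=False)))  # True
```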
To optimize the decision-making process and individually tailor immunosuppression according to the patient's actual needs, we should take full advantage of the screening tools already available to identify cases of ongoing subclinical antibody-mediated injury in the renal graft. Protocol biopsy has been shown to be a useful tool for identifying patients with subclinical rejection early in the post-transplant course[26]. The recognition that subclinical rejection appears in a substantial number of patients within the first year after kidney transplantation may be instrumental in guiding further therapy. Histological lesions found on protocol biopsies may be even more predictive when coupled with the presence of donor-specific antibodies: the combined appearance of C4d staining and DSA is associated with substantially worse graft survival than either finding alone. The presence of DSA, nonetheless, appears to be an independent predictor of graft loss[39,40]. Moreover, the appearance of DSA is associated with non-adherence and prior rejections[39], as well as with mTOR-based immunosuppression compared with CNI use[7]. Though DSA monitoring has recently been introduced into routine clinical practice, there are no clear guidelines on how to use this information. With extremely sensitive techniques now identifying DSA at low titers in otherwise completely asymptomatic and stable patients, what should be the next logical step after the de novo appearance of DSA is identified? Perhaps the presence of C4d or subclinical rejection on protocol biopsy, or progressive and otherwise unexplained albuminuria, may strengthen the case for a more aggressive treatment strategy in these patients. Persistent proteinuria was part of the early definitions of chronic kidney disease[41] and has long been known to be an important cardiovascular and renal predictor in both diabetic and non-diabetic renal disease. In addition, proteinuria is common after renal transplantation and has been identified as an important predictor of graft loss, adverse cardiovascular events and increased overall mortality in renal transplant recipients[42]. It is also predictive of adverse outcomes at low levels when presenting early after transplantation[43]. Moreover, proteinuria is a consistent feature of acute rejection and one of the clinical hallmarks of transplant glomerulopathy. Furthermore, a link seems to exist between the appearance of DSA and proteinuria, in that proteinuria seems to precede the appearance of DSA and appears to be an important predictor of rapid decline in graft function[44]. Additional efforts to explore the relationship between the de novo appearance of DSA and low-level proteinuria in otherwise clinically stable patients may prove useful in the clinical decision-making process for such patients. In the absence of definitive studies on this subject, close monitoring of proteinuria may be advisable in all patients. Persistent proteinuria, even at low absolute levels, should alert one to the possibility that a subclinical antibody-mediated process is at work. In such patients, minimizing or withdrawing CNIs or steroids may prove deleterious.
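The escalation reasoning suggested in this paragraph, namely that de novo DSA alone does not settle the question but that corroborating evidence of subclinical antibody-mediated injury strengthens the case for more aggressive treatment, can be sketched as follows. This is a hypothetical illustration in the same spirit as the sketch above; the inputs and function name are ours, and no such algorithm has been validated.

```python
def treatment_escalation_suggested(de_novo_dsa: bool,
                                   c4d_positive: bool,
                                   subclinical_rejection: bool,
                                   unexplained_progressive_albuminuria: bool) -> bool:
    """De novo DSA alone prompts closer surveillance; corroborating
    evidence of subclinical antibody-mediated injury (C4d, subclinical
    rejection on protocol biopsy, or progressive unexplained albuminuria)
    strengthens the case for escalation. Illustrative logic only."""
    if not de_novo_dsa:
        return False
    return (c4d_positive
            or subclinical_rejection
            or unexplained_progressive_albuminuria)

# Example: low-titer DSA plus otherwise unexplained, progressive albuminuria
print(treatment_escalation_suggested(True, False, False, True))  # True
```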
MINIMIZATION AND THE ROLE OF MTOR INHIBITORS
The early promise of mTOR inhibitors was that they could potentially provide relief from the long-term toxicities of CNIs[45]. Antiproliferative, antitumoral[46-48] and antiviral effects, including activity against CMV[45,49], polyomavirus BK[50] and other viruses[47], coupled with a lack of nephrotoxicity[45], appeared to be attractive properties and fit neatly into the strategy of CNI minimization or withdrawal at either an early or a later time point after transplantation. It soon became apparent, nonetheless, that the role of mTOR inhibitors may be limited once a certain amount of cumulative damage from CNI toxicity has already accrued. A 24-mo efficacy and safety trial of conversion from calcineurin inhibitors to sirolimus maintenance therapy in renal allograft recipients showed that no apparent graft survival benefit could be achieved by replacing CNIs with mTOR inhibitors in patients with already low GFR or substantial proteinuria[36]. However, multiple trials suggested that earlier introduction of mTOR inhibitors coupled with CNI dose reduction (i.e., an mTOR/calcineurin inhibitor combination)[51], or conversion to an mTOR inhibitor with complete CNI withdrawal[32,33,35,49,52], may be beneficial in terms of preserving renal function and lowering the incidence of CMV infection[53], polyomavirus BK infection[50] and malignancy[54-56]. However, concerns have been raised about such strategies due to a number of emerging issues associated with mTOR inhibitors, including non-adherence to protocols[34], increased mortality and graft loss[8,9,35], worsening proteinuria[35] and an increased incidence of DSA[7]. Partly due to these considerations, and perhaps to an even larger extent due to the unfavorable adverse effect profile associated with mTOR inhibitors, the use of this strategy has sharply declined in North America[21]. This, in turn, has given rise to a dichotomy between the United States and other developed regions in terms of immunosuppressive strategies, a pattern curiously reminiscent of what we had observed in international comparisons of hemodialysis practices[57]. Strikingly, a dichotomy also seems to exist in terms of graft survival[58], a phenomenon that has certainly not yet been sufficiently analyzed. While most programs in the United States appear to favor a relatively homogeneous approach with induction therapy, tacrolimus and mycophenolate, with or without maintenance steroids[21], several programs in Europe use mTOR inhibitor-based combinations and report more favorable clinical outcomes, particularly in low-risk patients[37]. What may lie behind such differences? In the absence of reliable data, the authors are forced to rely on their own experience. While there may be important differences in immunological risk profiles, and perhaps in drug metabolism, across patient populations, there also seem to be important regional differences in mTOR inhibitor dosing. North American studies reporting higher mortality and graft loss used mTOR inhibitor dosages and target levels substantially higher[9] than those we have seen in some European programs, and these higher dosages were, in turn, associated with more frequent adverse effects and non-adherence to mTOR-based regimens. This latter point cannot be emphasized enough. Lower adherence may be associated with graft loss and antibody-mediated humoral mechanisms[28], and in many instances might be due to higher-than-tolerable dosing in an important minority of patients.
This might suggest that such patients could benefit from dose reduction. Such a strategy is possible, however, only when sufficiently close follow-up is in place to uncover the tolerability-limiting adverse effects of a particular immunosuppressive agent.
TAILORING: MAKING USE OF WHAT WE HAVE
While newer immunosuppressive agents hold great promise, individualized use of the drugs we already have available may further enlarge our therapeutic horizon. This presupposes two elements: (1) a thorough evaluation of all risks, including immunological risk due to donor-, preservation- or recipient-related factors, and the recipient's metabolic risk for new-onset diabetes, hyperlipidemia and weight gain; and (2) screening for circulating donor-specific antibodies, with or without protocol biopsies, together with more conventional renal predictors such as proteinuria. Additionally, during chronic follow-up the physician should carefully screen for adverse effects limiting the tolerability of a specific drug class, keeping in mind that many of these side effects may be dose-dependent. For de novo patients with high immunological risk, the current practice of giving induction therapy with a lymphocyte-depleting agent and a CNI-based triple therapy seems a logical choice. In patients with lower immunological risk, however, treatment regimens could be more diversified. For instance, in patients at higher risk for CMV or BK virus infection, or those not tolerating inosine monophosphate dehydrogenase (IMPDH) inhibitors (inhibitors of lymphocyte de novo purine nucleotide biosynthesis, i.e., mycophenolate mofetil and mycophenolic acid) in sufficient dosages, the synergistic effects of a calcineurin inhibitor-mTOR inhibitor combination could be utilized to keep both drugs at a lower dosage. Clinical experience suggests, at least in European patients, that a relatively low "combined target level" of 7-10 ng/mL for a tacrolimus-mTOR inhibitor combination (the whole-blood trough levels of tacrolimus and the mTOR inhibitor summed together, both expressed in ng/mL) may provide sufficient immunosuppression while avoiding many of the adverse reactions associated with the higher targets used historically. For those at risk of calcineurin inhibitor-associated adverse effects, including malignancy, mTOR conversion may be a logical choice; such patients often do not require high mTOR inhibitor dosages and tolerate these regimens reasonably well. Patients with de novo appearance of DSA, especially when combined with rising levels of proteinuria, may benefit from a relatively higher level of maintenance immunosuppression, preferentially a CNI-based one. Conversely, patients on CNI-minimized regimens or after CNI withdrawal may benefit from close monitoring for DSA and proteinuria, given the data showing a higher incidence of de novo DSA appearance in such patients[7]. Patients at higher risk of metabolic complications, such as new-onset diabetes, may benefit from an IMPDH inhibitor-based immunosuppressive regimen, provided that a relatively high dose is well tolerated. Steroid sparing may be important in such patients, but this may need to be counterbalanced against the higher risk of acute rejections[5,6], which may or may not translate into more antibody-mediated mechanisms later in the transplant course.
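The arithmetic of the "combined target level" described above is a simple sum of the two whole-blood trough levels; the short sketch below makes it explicit. The function names are our own, and the 7-10 ng/mL window reflects the clinical experience cited in the text rather than any validated guideline.

```python
def combined_trough(tacrolimus_ng_ml: float, mtor_ng_ml: float) -> float:
    """Sum of the whole-blood trough levels of tacrolimus and the mTOR
    inhibitor, both expressed in ng/mL, as described in the text."""
    return tacrolimus_ng_ml + mtor_ng_ml

def within_combined_target(tacrolimus_ng_ml: float, mtor_ng_ml: float,
                           low: float = 7.0, high: float = 10.0) -> bool:
    """True when the summed trough falls within the 7-10 ng/mL window
    suggested by clinical experience (not a validated guideline)."""
    return low <= combined_trough(tacrolimus_ng_ml, mtor_ng_ml) <= high

# Example: tacrolimus 5 ng/mL + everolimus 4 ng/mL -> 9 ng/mL, within target
print(within_combined_target(5.0, 4.0))  # True
```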
Emerging data on costimulation blockade-based regimens offer promise that a new alternative to CNI-based regimens may become available in centers able to afford the high costs associated with belatacept. Reports of five-year outcome data indicate that, despite a higher incidence of early acute rejections, renal function and patient safety are maintained with belatacept, and the incidence of post-transplant lymphoproliferative disorder remains acceptable, especially in patients who are seropositive for Epstein-Barr virus at the time of transplantation[59,60]. Conversion from a CNI to belatacept also appears to be possible without evidence of inferiority in terms of patient survival or graft outcomes[61]. Should belatacept become more accessible in the future, enough clinical experience may accumulate to define a role for this promising agent in patients with appropriate risk and safety profiles.
Finally, with emerging data emphasizing the importance of non-adherence[28], we should maintain close monitoring for adverse reactions. Early detection of a compliance-endangering side effect gives us the opportunity to tailor the dose or choose an alternative drug to accommodate individual susceptibilities and side effects.
CONCLUSION
In the practice of clinical medicine, we often have to make the best decision based on less-than-complete information or in patients with multiple co-existing comorbidities; in such situations, a rigid application of "evidence-based medicine" itself becomes a contradiction. Accordingly, when managing an individual patient, side effects, complications and co-morbidities may take precedence over excessively rigid adherence to pre-existing pathways. Perhaps the time has come to abandon the "one size fits all" approach and to go beyond rigid protocols in choosing the optimal immunosuppressive regimen for an individual patient. Potential areas of consideration are: (1) a thorough assessment of the immunological and metabolic risk profile of each recipient; (2) screening for predictors of graft loss and early signs of antibody-mediated rejection with DSA, protocol biopsies and proteinuria (including close follow-up of adverse effects, with dose adjustments or conversions as necessary); and (3) increased awareness of the possible link between poor tolerance of a given drug at a given dose and non-adherence to the prescribed regimen. Altogether, these considerations may broaden our therapeutic horizon and make possible the most effective use of the drugs we already have.