Computer Systems Validation Part 5 –
Risk Assessments

In this article I will discuss Impact and Risk Assessments.
In particular I will detail what has to be included, what can be left out, and importantly how to reduce the time and effort required at this stage, while still ensuring all compliance and business considerations are included.

The thing is, with Risk Assessments I could go ever deeper into “What If?” examples, and I guarantee that every validation practitioner will have come across something that gave them pause for thought on how to treat a particular scenario.

So, what I have endeavoured to do here is cover the key points and identify some of the more common considerations.

Of course if you really do get caught in a quandary, then my contact details are included at the end of this article – please feel free to get in touch!

Note for the discussion:

I’ve stated this previously, but it is worth starting this blog with the following (paraphrased) statement:

“Remember to keep in mind that the purpose of qualifying a system is to ensure the system ultimately manages the residual risks to patient safety – very much the approach described in the UK, EU and ICH guidelines.”

CSV Risk Assessments

It is all too easy to get drawn into the minutiae of the system details and completely lose sight of the goal. It is worth remembering that the primary principles of quality risk management as detailed in ICH Q9 are:

  • The evaluation of the risk to quality should be based on scientific knowledge and ultimately link to the protection of the patient. (Note: Risk to quality includes situations where product availability may be impacted, leading to potential patient harm.)
  • The level of effort, formality and documentation of the quality risk management process should be commensurate with the level of risk.

We need to consider Risk Assessments in three parts:

New Systems

So what are the typical risks for computerised systems?

For CSV, I suggest considering the risks in three different groups:

Risk Group 1 – the System functionality

Risk Group 2 – Standard Computer Risks

Risk Group 3 – the “Compliance” functionality

I’m actually going to discuss these in reverse order.

Compliance functionality

And by “Compliance” functionality I mean the regulatory expectations for computerised systems, namely areas including:

  • Electronic Security
  • Audit Trails
  • Electronic Signatures
  • Backup and Restore
  • Business Continuity

The reason I am starting here is that the risks on any given regulated system in the above areas are exactly the same as the risks on any other regulated system.

Let me explain. If we start with the premise that a regulated system has to have the expected functionality described above, then all we need to do is identify the inherent risks for each of those areas. These risks can then be “baked in” to our risk assessment (and by extension our qualification documentation templates) by default for any current and future systems without the need to re-assess each time.

This is a concept that many people struggle with initially, especially those familiar with other validation topics but new to CSV, or new to validation completely.

This is because traditionally risk assessments are performed using the Risk Matrix and FMEA (Failure Mode and Effects Analysis) methodologies, and these absolutely can be used for all CSV considerations. In fact the guidelines point to these as the typical approaches that should be taken.

What I am advocating is that if the same sub-set of risks are present in all systems, then just accept those risks are going to exist, ensure those risks are addressed within the software functionality testing by default, and move on to the “other” risks.

I have had many conversations with regulators, customers and validation practitioners regarding this approach which has always been accepted based on the information presented.

A final note on the reduced approach for these regulated system elements – this approach can only be taken if the elements themselves are 100% included in the subsequent IQ and OQ documentation. I choose to add them into controlled templates – and this ensures that I will always assess the risks, test the mitigation/control effectiveness, and ensure that the regulatory requirements are met.
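As an illustration of “baking in” these standard compliance risks, one could hold them as a controlled data set that every new assessment inherits by default. This is only a sketch – the risk areas come from the list above, but the control/check wording and function names are hypothetical, not a prescribed template:

```python
# Standard "Compliance functionality" risk areas (from the list above),
# baked in so every new assessment inherits them by default.
# The control/check wording here is illustrative only.
STANDARD_COMPLIANCE_RISKS = {
    "Electronic Security": "verify access controls and user roles in OQ",
    "Audit Trails": "verify audit trail capture and review in OQ",
    "Electronic Signatures": "verify e-sig controls where implemented",
    "Backup and Restore": "verify backup and restore procedures in OQ",
    "Business Continuity": "verify BCM arrangements are documented",
}

def new_risk_assessment(system_name, extra_risks=None):
    """Start an assessment pre-populated with the standard risk areas."""
    risks = dict(STANDARD_COMPLIANCE_RISKS)
    risks.update(extra_risks or {})  # system-specific risks added on top
    return {"system": system_name, "risks": risks}
```

Because the standard areas are always present, each new system's assessment only needs to add the risks that are specific to that system.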

‘Standard’ Computer Risks

By this I mean areas that would not be identified as regulatory risks directly, but would be common to most or all IT systems. Examples might include:

  • Power Loss
  • Data Loss
  • Data Corruption
  • Uninterruptible Power Supply (UPS) Provision
  • Business Continuity Management (BCM)

The approach for these can be exactly the same as for the Compliance Functionality – i.e. a set of standard tests / checks for each that ensure risks have been adequately addressed.

We would then follow a set of steps to identify the overall system functional risk, and then the scope of the qualification effort and testing approach required.

System functionality

Firstly, we need to identify the GxP impact of the system itself. This was briefly discussed in the earlier Blog on Roles and Responsibilities.

This is done by asking seven Critical Questions for regulated systems:

  1. Does the system generate, manipulate, or control data supporting Regulatory Submissions?
  2. Does the system control Critical Processing Parameters (CPPs) or associated data used at any stage?
  3. Does the system provide data for Product Release?
  4. Does the system control data required in case of a Product Recall?
  5. Does the system control Adverse Event or Complaint recording or reporting?
  6. Does the system support Pharmacovigilance?
  7. Does the system manage any other GxP data or processes? (the catch-all safety-net)

Rather than these being a completely straightforward Yes or No, we need to identify if any parts of the system have a DIRECT, INDIRECT, or NO impact on GxP (choosing whichever is highest).
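The “choose whichever is highest” rule can be sketched very simply in code – a minimal illustration, where the answers are the per-question impact ratings and the function names are my own:

```python
# Rank the possible answers to the seven Critical Questions and take
# the highest rating found across all questions (DIRECT > INDIRECT > NO).
IMPACT_RANK = {"NO": 0, "INDIRECT": 1, "DIRECT": 2}

def overall_gxp_impact(answers):
    """answers: one 'NO' / 'INDIRECT' / 'DIRECT' rating per question."""
    return max(answers, key=lambda a: IMPACT_RANK[a])
```

So a single INDIRECT answer among six NOs makes the whole system INDIRECT, and a single DIRECT answer makes it DIRECT.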

In all likelihood, the systems that have a DIRECT impact on GxP – although this is not an exhaustive list – tend to fall within a small number of system / functional types:

  • Batch Release to Market
  • In-process testing used for Batch Release
  • Material Testing and Release
  • Software as a Medical Device

All other regulated software then falls under one or more INDIRECT categories.

As a side note, it is important to repeat the assertion that validation is required for ALL GxP systems and not just GMP systems.


This may mean having difficult conversations with your colleagues in Regulatory and Pharmacovigilance roles, and significant investment of your time to educate and support them to ensure their systems are compliant and qualified.

Of course if software is assessed as having NO GxP impact, then there is in theory no need to validate at all. Simply perform this part of the assessment, document it, and file it with the pharmaceutical quality system.

However, as I will keep repeating in these blogs, there are very good business reasons to at least take a similar structured approach to system implementation as one would for a regulated system.

Of course trying to get buy-in from management for this approach, especially where time and resources are tight, can be a different matter altogether….. but ultimately proving that any system consistently functions as intended should add value as well as reduce risk.

So, returning to GxP Systems, we have identified that validation must be performed.

The next step is to identify the “GAMP Level” – so called because these are the levels of application defined within the GAMP Guidelines, and are widely accepted as industry standards.

| GAMP Level | Type | Description |
|---|---|---|
| 1 | Infrastructure Software | Systems that link together in an environment for running and supporting applications and services |
| 3 | Non-Configured Product | Off-the-shelf products where run-time parameters can be entered, but where the software itself cannot be configured |
| 4 | Configured Product | Off-the-shelf products that are configured to meet the needs of the user processes |
| 5 | Custom Application | Systems developed and coded to meet specific requirements |

Note – there is no Level 2. This level was removed some years ago in an earlier revision of the GAMP Guidelines, and rather than renumber the levels 1 to 4, a decision was made to keep the existing terminology and simply not use Level 2 (previously used for firmware).

Once we have defined the GAMP Level we can actually rate the overall risk level of the system as follows:

| GAMP Level | System Impact | System Risk |
|---|---|---|
| 1 | N/A | LOW |
| 3 | Indirect | LOW |
| 3 | Direct | MED |
| 4 | Indirect | MED |
| 4 | Direct | HIGH |
| 5 | Indirect | MED |
| 5 | Direct | HIGH |

And in many ways, that is the main part of the Risk Assessment complete.
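As a sketch, this lookup can be written directly in code. The risk values here are one reading of the matrix above (with a Level 1 infrastructure system rated LOW regardless of impact), so treat them as illustrative rather than definitive:

```python
# (GAMP level, GxP impact) -> overall system risk, per the matrix above.
RISK_MATRIX = {
    (3, "INDIRECT"): "LOW", (3, "DIRECT"): "MED",
    (4, "INDIRECT"): "MED", (4, "DIRECT"): "HIGH",
    (5, "INDIRECT"): "MED", (5, "DIRECT"): "HIGH",
}

def system_risk(gamp_level, impact):
    """Overall system risk for a regulated system."""
    if gamp_level == 1:
        return "LOW"  # system impact is N/A for infrastructure software
    return RISK_MATRIX[(gamp_level, impact)]
```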

In order to fully define the validation and testing scope, of course, one cannot simply take the Risk above and apply a standard approach – there really is no ‘one-size-fits-all’ solution.

Rather we would also need to consider the above along with the associated GAMP Level ‘V’ Model, and the approach required for each. As would be expected, the higher the level, the more work is required (and this becomes particularly significant at GAMP Level 5).

I will talk in some depth about the scope of work required for each identified risk level in the future blog on Validation Planning.

The Risk Assessment is not however complete, as there are a number of variables that must also be considered as part of the assessment – and these can ultimately change the Risk Level.

Variables will differ from system to system, and some time and thought may be required to fully consider all the factors.

Below are examples of the typical additional considerations:

Validation Status

If the system has previously been validated within your wider organisation, this can be used as the basis for your validation effort – the risks will not only have been identified previously, but the system should already have been qualified.

A formal Gap Analysis would typically be required in this instance as it will identify what you can use from existing validation and allow this to be cross-referenced accordingly. I will discuss this also in a future post on Validation Planning.

Vendor Validation Package

Many vendors will offer validation packages (and potentially at significant cost), and these can range from some blank templates to be completed by a customer, through to a package that has been executed by the vendor, or all the way through to complete automated validation testing and reports.

These are a sliding scale of usefulness for the end customer, but each would still need to be assessed to ensure all previously identified risks have been addressed. Just because a vendor provides a package does not automatically mean that it is 100% complete in scope or indeed actually fit for purpose.

I will discuss this further during the next post which will be on Vendors.

Electronic Records Identification

It is crucial to identify at this stage if a system will retain electronic records, especially as the creation, control and data integrity of these records is critical. However, there are many examples of GxP systems that do not.

For instance, a custom-built application might be used to control a blending machine, where the run-time parameters are entered by the user each time. The system would then run and generate a report printout at the end – but not create any electronic records.

Under the strict definition from GAMP the software would be classed as Level 5 as it has been custom built. So, as this is bespoke software, using the criteria stated above, the Risk would be categorised as HIGH. But logically this is a simple application that controls a blender.

We can therefore justify reducing that risk as no electronic records are being generated.

Where electronic records ARE a part of the system, we must ensure that the rules and requirements for these will ultimately be met.

I advocate managing this by ‘baking-in’ the design, checks and testing of electronic records within the qualification templates to ensure the process is covered by default. I will discuss this approach further in a later blog.

Electronic Signature Identification

If a system has Electronic Signatures (or ‘e-sigs’) generally the risk due to the processes involved would increase – there are very specific controls that need to be in place for e-sigs.

Verification of correct e-sig implementation and of the associated controls must be included in both the Installation and Operational Qualification stages.

I will discuss e-signatures further in the future post on Specific Regulatory Considerations.

Quality Systems Audits

The need to verify a vendor and their quality systems is not always as clear cut as “If x then perform an audit”.

A vendor of a GAMP Level 1 system would almost certainly never require an audit. And in fact, a vendor of a GAMP Level 3 system would most likely not either.

However, once we reach the GAMP Level 4, a quality systems audit becomes something that should be strongly considered, and at Level 5 this would typically be considered mandatory.

An additional consideration here is the system function – so where the system was previously identified as DIRECT, this will also increase both the overall risk, and the requirements for an audit.
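This rule of thumb can be sketched as a simple lookup. The escalation of the Level 4 recommendation for DIRECT-impact systems is my own simplification of “increases the requirements for an audit”, and the wording of the recommendations is illustrative:

```python
# Baseline vendor audit expectation by GAMP level, per the
# discussion above. Recommendation wording is illustrative.
AUDIT_BY_LEVEL = {
    1: "not required",
    3: "most likely not required",
    4: "strongly consider",
    5: "mandatory",
}

def audit_recommendation(gamp_level, impact="INDIRECT"):
    """Escalate the Level 4 recommendation where system impact is DIRECT."""
    if gamp_level == 4 and impact == "DIRECT":
        return "mandatory"
    return AUDIT_BY_LEVEL[gamp_level]
```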

Many vendors will not have a problem with being subject to a quality systems audit, especially those with an existing Quality Systems accreditation such as ISO9001.

In general a paper-based questionnaire for a Computer Systems vendor, supported by a remote audit, works just as effectively as an on-site visit.

There are potential risks here of course.

  • A vendor may not actually have any recognised quality systems / accreditations to audit against
  • A vendor may refuse an audit
  • A vendor may actually fail the audit
    • Or even worse, compound that by then refusing to accept any recommendations

I have first-hand experience of all of the above. These can and do present significant challenges, and certainly go some way to increasing risks.

I will come onto audits and auditors in a later post and vendors in the next post, but it is important to remember that a good audit can significantly help with identifying and reducing risks.

Vendor History

Regardless of the audit status, a vendor with long experience of supplying to the pharmaceutical industry will provide a level of assurance.

No guarantees of course, but generally an established, well-respected industry leader would typically mean lower risk(s), and they should be able to provide all the supporting evidence and documentation required – if only because every one of their customers should be asking the same questions and requesting the same documentation.

Conversely, a company with little or no experience or understanding of pharmaceutical regulatory requirements could present significant risks in development and assurance. They are likely to require significant levels of support and education on GxP requirements.

Business Risks

Although not a consideration from a regulatory perspective, I would recommend that the potential business risks are also considered.

The most critical of these, of course, is the question: “what would the impact on the business be if the system failed or was unavailable?”

Cynics here may suggest that the wider business often will neither care about your system, nor even entertain considering potential issues until the bottom line (i.e. sales or finance) has been impacted. By which time it is too late.

And experience has indeed suggested this is exactly what can and does happen.

So I have always ensured that the impact of the system failure on the wider business is considered every time.

And after all that we finally get to the final Risk Group – Group 1 – The System Functionality.

In many ways there are no practical risk assessments to actually perform at this stage. The system either functions correctly, doesn’t function correctly, or doesn’t function at all.

It would be a waste of time to go through every line of functionality on a Risk Assessment, so unless there are specific or critical processes we wish to highlight at this stage, we can simply conclude that system functionality will be verified as correct under functional testing / qualification.

In other words, regardless of the risk, if it is considered GxP, it gets tested.

Updates to Existing Systems

Up until now I have discussed the risk considerations for new system implementations. But what about updates and changes to existing systems?

In many ways a substantial update to an existing system can be a more challenging exercise. This is because when updating from version ‘a’ of a system to version ‘b’, we have to review all the change logs / release notes. Line by line.

This is fine when the proposed changes are few in number and easy to understand, but the task can become much more difficult when:

  • The changes are significant in number
  • The changes are written in a technical manner such that they cannot be understood by anyone except the developer
  • The vendor provides incomplete release notes, or does not provide any notes whatsoever

With the first issue there is nothing for it but to go through the mass of updates and pick out any that may have an impact on the GxP processes, business processes, or the existing procedures.

I would normally record each change on the Risk Assessment with a statement of what the impact (or no impact) specifically is. This can potentially lead to a long document requiring significant time and effort to complete, but it does provide assurance that everything has been properly assessed.

Unfortunately, I am not aware of a better way of doing this. If anyone reading this has a more pragmatic solution, I would be very keen to hear of it!

With the second issue, you may need the help of the vendor to understand what some of this means (good luck!), or involve the Translator.

If you cannot properly assess the potential impact(s) of the changes, you may have to state the increased risk and accept you will have to widen your validation scope accordingly.

Finally if you have not received a document describing the changes, and polite enquiries, formal requests, and desperate pleading to the vendor have all proven fruitless, you may have little choice but to treat the project as a new system and work from there.

The Horseman of Conquest beats you again…

Note – where the update to the system is very significant, for instance where you are moving from a version that is significantly “out of date” to a brand new version and there have been many updates in between, it may just be easier and quicker to treat it as a new system anyway.

Typically changes may impact on different areas of the system, and this can impact on the risk profile.

For instance where updates are important but restricted to ‘back-end’ or ‘quality-of-life’ type updates, these can all be considered GAMP Level 1 changes.

If your system has previously been identified as GAMP Level 4 you might not have to qualify to that level again at this time – in this instance you might be able to just treat as Level 1 and assess the impact on procedures (for things like changed screenshots etc).

Also be mindful when the system updates actually change the GAMP Level the other way – i.e. they increase the GAMP Level.

This can occur when a non-GxP system gains new GxP processes, but more usually happens where a Level 4 system has some specific bespoke customer enhancements. Again, assuming the output of your risk assessment justifies it, you probably don’t want to treat the whole system as a full Level 5 application, as that would add significant extra effort and resource. Instead, simply identify the modified functions/processes and treat only them as Level 5 – with the rest remaining at Level 4.

NOTE – As a BIG word of warning here, be very careful with this type of setup where an organisation has some specific enhancements on an off-the-shelf system, as when the system is updated using a standard patch, the enhancements can be lost and/or the functionality impacted.

I have first-hand experience of one particular system that would lose critical bespoke functionality every single time the system was updated, requiring the vendor to then come and put the enhancements back in again (and hence additional validation resource was required to verify the process again).

So generally with system updates we would hope that the vendor has tested the changes robustly to help reduce our risks, and this includes regression testing.

Essentially regression testing is used to ensure that a change/fix to one area hasn’t unintentionally broken a function in a different area.
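As a concrete illustration, a regression check simply re-runs a function that was not meant to change against previously recorded results. This is a hypothetical Python sketch – the calculation and the baseline values are invented for the example:

```python
def calculate_potency(raw_value, dilution_factor):
    """A hypothetical GxP calculation that this update did NOT touch."""
    return raw_value * dilution_factor

def run_regression_checks():
    # Input/expected-output pairs recorded from the last qualified
    # version; a change elsewhere must not alter any of these results.
    baseline = [((2.5, 4), 10.0), ((1.0, 10), 10.0), ((0.0, 7), 0.0)]
    return all(calculate_potency(*args) == expected
               for args, expected in baseline)
```

If a change in a different module inadvertently altered this calculation, the check would fail and flag the unintended side effect.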

With this assurance we simply plug in the Level/Type of the changes against the Direct/Indirect/Non-GxP nature of the system and identify the risk from there:

 
| Change Type | NO IMPACT | INDIRECT | DIRECT |
|---|---|---|---|
| Change is on the Supporting Systems (i.e. Operating System, Drivers, Middleware etc.) | NONE | LOW | LOW |
| Change is limited to non-GxP updates and / or bug fixes only, with NO new GxP functionality | NONE | LOW | MEDIUM |
| Change is limited to a single, previously qualified, function * | LOW | MEDIUM | MEDIUM |
| Change is limited to a small number of previously qualified functions, or minor new GxP functionality * | * | MEDIUM | HIGH |
| Change is significant, or adds major new GxP functionality * | * | HIGH | HIGH |

* These may or may not include additional bug fixes.
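The same table can be sketched as a lookup in code. The change-type names below are my own shorthand for the table rows, and combinations the table leaves unrated are returned as None:

```python
# (change type row, system impact column) -> risk level, per the table
# above. Row key names are shorthand; unrated cells return None.
CHANGE_RISK = {
    "supporting_systems":        {"NO IMPACT": "NONE", "INDIRECT": "LOW", "DIRECT": "LOW"},
    "non_gxp_updates_only":      {"NO IMPACT": "NONE", "INDIRECT": "LOW", "DIRECT": "MEDIUM"},
    "single_qualified_function": {"NO IMPACT": "LOW", "INDIRECT": "MEDIUM", "DIRECT": "MEDIUM"},
    "several_functions_or_minor_new_gxp": {"INDIRECT": "MEDIUM", "DIRECT": "HIGH"},
    "significant_or_major_new_gxp":       {"INDIRECT": "HIGH", "DIRECT": "HIGH"},
}

def change_risk(change_type, system_impact):
    """Return the change risk level, or None where the table gives no rating."""
    return CHANGE_RISK[change_type].get(system_impact)
```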

System Retirement

Risks must also be considered when retiring or replacing a system, as the data integrity absolutely has to be maintained for regulated data.

I will talk specifically about System Retirements in a future blog, but for now consider that all regulated data has to remain complete, correct and readable for a defined retention period. This is not always easy, especially if the original application used to read that data is no longer available…

How do we apply the risks to the validation scope?

So the next question would be – what range of testing do we need to do against each risk level?

This will be covered in a future post on Validation Planning, but the quick answer is by using a set of typical test regimes and performing specific testing where applicable.

Control Systems

Mention should be made of control systems – these are electronic systems that are either integrated with equipment or are used solely for the purpose of running that equipment.

It is likely in this scenario that the validation scope will be focused on the equipment rather than the software, but the same risks apply for the software as for any other software. Therefore, the validation team should ensure that this is taken into account when taking a holistic system approach to the risk assessment.

Flowcharts

As an addition to the discussion, the use of flowcharts can be particularly helpful in the Risk Assessment as they help identify the critical GxP Processes, where electronic records are created/modified/deleted, and where electronic signatures are required. They are particularly useful where the processes involved are complex or cross-over between other functions.

Below is an example flowchart for a simple Artwork Change process:

[Flowchart: CSV Risk Assessment – example Artwork Change process]

With a flowchart such as this, not only can we quickly identify the critical processing steps, but as each different type of system element is clearly identified the risks can be easily and quickly identified.

Conclusion

As we have seen, the main Risk Assessment itself can actually be relatively straightforward, with the level of effort commensurate with the risk(s) of the system and ultimately with patient safety.

However, to ensure that an appropriate risk assessment has been performed, there are a number of key considerations that must be evaluated and these are dependent on the system and lifecycle stage.

In the next entry in this CSV series, I will discuss Vendors and how they are key to the success (or otherwise) of any collaborative project.

To find out how we can help your organisation with your Computer Systems Validation and Data Integrity needs please get in touch at consultancy@jensongroup.com or to read more about our CSV services, please click here.


Neil Rudd
March 7th, 2024