
On the safety of EHRs

The subject of EHR safety is back in the news – probably driven by the AMIA meeting last week:

“I remember internal conversations where we talked about ‘What is the equivalent of a plane crash that is going to get the attention of people?’” said Reider, who now practices family medicine in upstate New York. “‘Is it going to be a congressperson’s relative is harmed by health IT that causes the attention to shift?’ I would offer that still hasn’t happened yet, but someday it will. And gosh, wouldn’t it be a horrible thing that we have to wait for that to happen?”

from https://khn.org/news/no-safety-switch-how-lax-oversight-of-electronic-health-records-puts-patients-at-risk/

What the article doesn’t discuss – which is actually a remarkable omission – is that the issue at the heart of this is not technology, and not even the software in question.

That’s actually evident in the two solid instances of harm offered in the article:

KHN/Fortune examined more than two dozen such cases, such as a California woman who mistakenly had most of her left leg amputated because the EHR sent another patient’s pathology report indicating cancer to her medical file. A Vermont patient died after a doctor’s order to scan her brain for an aneurysm never made it from the computer to the lab

Both of these are errors at the perimeter of the EHR – they involve not the technology itself but information systems and interoperability, and they actually sound like the heart of the problem could be the lack of a common patient identifier.

That’s the thread running through all this – the EHRs are just one cog in a fiendishly complicated system. Since they are the central system of record, any mistake anywhere in the system ends up as an error in the EHR.

Of course, there are real errors in the large systems – there always are. And the systems are highly configurable, which is its own fertile source of errors. But most of the errors I’ve seen arise at the boundaries of the systems – our fragmented information systems are a natural representation of our fragmented healthcare system.

One easy knee jerk reaction to this is to regulate EHRs as a medical device:

The bipartisan law, which speeds up approvals for some medical therapies, states flatly that electronic health records are not medical devices subject to FDA scrutiny.

There’s a place for regulation, but treating systems of systems as a device, and applying language and systems meant for devices… I don’t think that’s going to help.

On the other hand, the safety of the healthcare system as a whole, as supported by its information systems: we need to take that seriously.

Some longtime EHR safety advocates say they have all but given up hope for consensus on any system that would investigate and share findings from adverse events, as happens in other industries, like transportation and aviation.

And collecting information and sharing findings would be a great way to start. But I’m not holding my breath.

Patient Innovator Track at #FHIR DevDays

A new phenomenon at DevDays is the Patient Innovator Track. This track shows that patients are the ultimate beneficiaries of FHIR. The Patient Innovator Track provides a stage to patients who have taken control of their health by using data about their disease and their treatment, or to app developers who have enabled patients to do so.


The Patient Innovator Track takes place on Wednesday November 20th, the first day of the event. In 10-minute pitches in the plenary room, the participants get the opportunity to demo their achievements.


Applicants first compete for an invitation to DevDays. Those who are invited compete for the Patient Innovator Award. A jury will select the best presentation from the participants. The award is a contribution in kind by a sponsor. This could be free tooling or cloud services, developer or consultancy resources, space in an app store, promotion, etc.

Why apply?

Why would you even consider applying for the Patient Innovator Track?

·       Get connected to the FHIR ecosystem and learn about the specification that’s transforming the industry

·       Reach a community of health IT professionals who can potentially help you take your solution to the next level

·       Connect with EHR vendors and discuss integration opportunities

·       Discuss with forward-looking medical professionals how to embed the solution into medical practice

Who should apply?

The Patient Innovator Track is intended for patients who have applied information technology to get better insight into their disease or treatment, to take control of their health, or to improve the quality of their life. Developers who have built patient-facing apps with the same goals are also invited to apply, as long as they have active patient engagement in building their product.


The jury reviews applications based on the following criteria:

·       Direct impact on the patient’s health or treatment

·       Empowerment of the patient to take control of their health

·       Applicability for other patients

·       Proven usage in daily life

Your application can be related to an app, or a hardware device producing data. Your app or device does not need to be FHIR-enabled (yet). You need to be able to give a live demo of your product.


Submit your application by filling out the form before October 1st, 2019. We have room for four participants. The selection will be announced on October 8th, 2019.


We are looking for sponsors to help us make a success of the Patient Innovator Track. Specifically, we are looking for two types of sponsors (sponsors who take on both roles are welcome!):

1.       In-kind sponsors, who commit resources to help the winner take the app to the next level. These could be individuals as well.

2.       Financial sponsors, whose support will help us cover travel costs for the patients and free admission to DevDays.

In-kind sponsors may also come forward during the event. Note: we cannot guarantee the in-kind sponsors upfront.


Jury members of the Patient Innovator Track are Mikael Rinnetmäki, Grahame Grieve, Dave deBronkart (“e-Patient Dave”) and representatives from the sponsors. The track lead is Mikael Rinnetmäki.


There are no costs involved in participating in the Patient Innovator Track. You will have free access to DevDays. Travel and stay for individual patients are covered by DevDays (not for commercial companies participating in the track). Ask for the details.


For enquiries about the Patient Innovator Track, please contact Marita Mantle-Kloosterboer at devdays@fire.ly.

Future of Secure Messaging in Australia

The Australian Digital Health Agency is working hard on a secure messaging project. This is a project to build out a working eco-system so that any clinician can send documents and messages to any other clinician in a secure fashion.

There’s multiple uses for this kind of messaging:

  • Sending Diagnostic Reports to the requesting physician, along with anyone else the physician asked to be notified
  • Sending Discharge Summaries
  • Referrals from GP to specialist, and letters back from specialists
  • Submission of forms to hospitals (usually a form of referral)
  • Requests for further information from other clinicians

Some of these are well in production and have been for many years. Yet clinicians still report being overwhelmed by paper from their faxes, and that’s generated an “Axe the Fax” campaign (that phrase, btw, appears to have come from the UK).

About Healthcare and Faxing

Note that the healthcare system is effectively responsible for keeping the fax industry alive (here and overseas). By some estimates I’ve heard, healthcare buys 80% of new faxes (but I can’t find any reference that says that).

Why is faxing so enduring in healthcare in particular?

Most of the commentary focuses on bad incentives around sharing information. I’m not sure that makes sense – ‘we are sharing information by massive volumes of fax because we have no incentive to share information’? I feel like that sentence doesn’t hang together.

The first and proximal reason that faxing is ubiquitous in healthcare is that email is not regarded as secure, and so all the work that’s moved to email in other industries hasn’t moved in healthcare.

“You can’t send sensitive healthcare information by email” (Nathan Pinskier)

And the work to get email secure is really significant – no other industry has done it, to my knowledge (except the military, on closed networks that they build at considerable expense – maybe we should build a healthnet?).

Not that faxing is actually that secure – it’s only as secure as the telco network (not much more secure than email these days, now that email has improved a lot, mainly by reducing the hop count) and the sender’s ability to get the destination fax number correct (see the case in the Nathan Pinskier article). I was once part of an investigation into a Queensland premier’s diagnostic report accidentally being faxed to a newsagent because a GP didn’t notify the diagnostic service when his number changed.

What we need is a secure alternative to email – some alternative protocol that allows for messages to be sent reliably and securely between healthcare providers.

Sending Messages Securely

Of course, here in Australia, we have one of those. SMD (“Secure Message Delivery”) is an Australian Standard (AS 5552), finalised back in 2010, that specifies how software acting on behalf of any healthcare provider can send a message to any other provider.

But in practice, providers can’t actually do that. They can easily deliver to some participants, but not to others. The government has focused quite some attention on digital health initiatives over the last decade, but none of them have focused on this very day to day operational problem in healthcare, and things haven’t been getting better.

The reason it isn’t just working as desired is not related to the protocol, but to the surrounding support – certificate management, registry administration, and implementation issues. It’s a really simple question: can you find the address of the person you want to send to, encrypt the message for them, and send it, knowing that your messaging provider will actually be able to deliver it – and that the cost of getting all this to just work will be paid somehow?

Readers should be clear, btw, that whatever else is a problem here, a real part of the problem is the healthcare system’s very conservative approach to technology. There’s no reason at all for GP practices to be buried in paper – just use a fax server that turns incoming faxes into digital documents (email etc.) internally – it’s cheaper and more convenient, and you can just do it – so why not?

Secure Message Project

Nevertheless, there are real adoption barriers here, many really linked to a lack of incentives. As a result of stakeholder feedback, the Digital Health Agency initiated a collaborative program with the relevant clinical record software and message delivery vendors, along with some stakeholder representatives, with the mission to identify any reasons that were blocking progress.

That program is still working. (Btw – I consult to the Digital Health Agency on this program a little).

The program is still going ahead, slowly (well, more slowly than the stakeholders desire, but not more slowly, so far as I can see, than these programs usually take). It’s generated some press recently (see here, and here).

In the last few months, I’ve had a few people express to me the opinion that this program is not a good investment. The source of this concern is architectural and protocol considerations.

Secure Message Delivery Protocol

The SMD protocol is a SOAP based protocol that is designed to do reliable and secure store and forward where the ultimate destination may be unavailable – on the basis that the end point – the clinical desktop – is not available for direct delivery.

That’s yesterday’s technology – why, they ask, are we investing in it?

It’s true that industry has largely moved on from SOAP. We certainly would not use SOAP if we were starting on that specification now. In the USA, starting at about the same time, they chose to use a secure email based approach, called Direct (now looked after by DirectTrust). We would certainly choose some approach like that now, unless we chose an even more direct delivery approach.

The other major factor that is changing is the “not available for delivery” characteristic – the store and forward mechanism that the SMD design is based on is increasingly unnecessary as more and more processing moves to the cloud, where the recipient end points are always available.

In the Cloud

When the Secure Message Program started, industry trends suggested that adoption of the cloud would be very slow – over a decade, if not longer (partly because of the extreme conservatism of healthcare cited above, but also due to security concerns).

However, during the time of the program, cloud adoption has accelerated dramatically, and many vendors have already deployed or are in the process of deploying cloud agents that act on behalf of the clinical system, irrespective of whether the system lives in the cloud or on the desktop. Given that environment, it seems as though investing in a store and forward based technology is investing in the past.

A direct delivery protocol – where clinical systems delivered directly to each other by some web service mechanism (there are several obvious candidates) is not only simpler, but can also be more reliable since the receiver can respond directly to the sender with regard to any business issues with the content.

So should we abandon SMD and work on a newer protocol instead?

Tomorrow vs the future

The first issue with this question is that, as I said above, the protocol is not the problem – it’s the certificates, the registries, and the business models around the message delivery that are the problem. So merely changing the protocol… how does that help? 

Well, it might make a difference, inasmuch as it could change the business realities and the considerations around certificates and registries etc. Though those are still the hard problems that need to be solved, and that the Secure Message Program is currently working on solving. So any work done now will at least be empowering and useful for any new protocol in the future.

The real question, though, is whether we should be aiming low, with lower risk, or aiming high, and taking on more risk. That’s a value question, really. Though I think that most people will say that we should demonstrate that we can walk before we run.

Still, I have some sympathy for the idea that we should invest in the future. It’s not rocket science to realise that in the future, everything will be based on web protocols, and that we’ll be running a mix of

·         Pull (providers access information when they want it)

·         Push (providers send information or notifications to other providers when they need to know about it)

·         Subscription (providers ask to be notified about specific events as they happen)

Nor is it much of a stretch to think that you should use one common exchange framework for all 3 of those, and that our best current choice for that is FHIR.
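To make that concrete, here’s a rough sketch of what the three patterns look like against a FHIR R4 server (the base URL, patient id, and notification endpoint are all invented for illustration):

```python
# Sketch of the three exchange patterns over a FHIR REST API.
# The base URL, ids and endpoint below are invented for illustration.

BASE = "https://fhir.example.org"

# Pull: the receiving system queries for information when it wants it.
pull_request = f"GET {BASE}/DiagnosticReport?patient=123&_lastUpdated=gt2019-06-01"

# Push: the sending system delivers a message bundle to the recipient.
push_request = f"POST {BASE}/$process-message"

# Subscription: ask the server to notify us when matching events happen
# (FHIR R4 Subscription resource, rest-hook channel).
subscription = {
    "resourceType": "Subscription",
    "status": "requested",
    "criteria": "DiagnosticReport?patient=123",
    "channel": {
        "type": "rest-hook",
        "endpoint": "https://receiver.example.org/fhir/notify",
        "payload": "application/fhir+json",
    },
}
```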

I’m not sure how we should consider this question – I’m sure there’s some out there who want to make the transition now, and not invest in SMD anymore. And I’m sure that there’s some out there who think we should just get SMD right and stick with that. I hope that there’s more who think that we will have to make the transition eventually, but aren’t sure when.

#FHIR DevDays: Smart Scopes

One of the subjects we discussed at DevDays in Redmond, Seattle (2019) was starting to plan what changes to make to scopes in v2 of the Smart App Launch specification.

The scopes are used as the language for the user, the resource owner, and the application to negotiate how much access the user wants to grant to the application.

The current scopes are fairly simple:

In principle, these are simple: read and/or write access, to all resources or to particular resources, acting as the patient or as a regular system user.
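For readers who haven’t seen them, the v1 scope strings look like patient/Observation.read or user/*.write. A minimal sketch of a parser for that syntax (the real grammar in the Smart App Launch specification has more detail than this):

```python
# Minimal sketch of the SMART App Launch v1 scope syntax:
#   <context>/<resource>.<permission>
# e.g. "patient/Observation.read" or "user/*.write".

def parse_scope(scope: str):
    """Split a SMART v1 scope into (context, resource type, permission)."""
    context, rest = scope.split("/", 1)
    resource, permission = rest.rsplit(".", 1)
    if context not in ("patient", "user", "system"):
        raise ValueError(f"unknown scope context: {context}")
    if permission not in ("read", "write", "*"):
        raise ValueError(f"unknown permission: {permission}")
    return context, resource, permission

print(parse_scope("patient/Observation.read"))  # ('patient', 'Observation', 'read')
print(parse_scope("user/*.write"))              # ('user', '*', 'write')
```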

We’ve long known that there’s problems with these scopes:

  • The scopes are too fine grained for some resources – the user isn’t interested in differentiating between (e.g.) DocumentReference and DocumentManifest
  • Some of the resources don’t make sense as a scope – List, for instance
  • Some of the resources need a finer grained scope – particularly Observation. Users don’t grant access to “Observations” – they would think in terms of Vital Signs, Labs, Clinical Measures
  • All of that is assuming we are talking about clinical users. Patients would generally think in functional terms – ‘all my records except for my X’, where X could be something like:
    • contact details (e.g. in the case of domestic violence)
    • STD history
    • Mental Health
    • Substance Abuse History
    • Pregnancy record history (newly a concern in the USA)

The problem with the last set of choices is that we don’t know how to compute those kinds of choices – so the APIs can’t enforce them. That’s work in progress.

So we have plenty of open issues, and it’s now time to start working on them. Some outcomes from our discussion at DevDays (credit to Isaac Vetter from Epic for the notes and for running the discussion):

Next steps:

  1. As a research step, collect and share the user-facing descriptions of scopes used by the different production SMART authorization servers. How are production AS’s describing SMART scopes to users now?
  2. Simply adding resource categories to our existing SMART syntax may quickly and simply alleviate a lot of the pain. Why not simply toss the resource category’s name into the scope?
  3. Somewhat related: specify a method for an OperationOutcome to specify the scope that would be required to resolve an access denied error to enable a progressive authorization approach. (a la enabling a HATEOAS approach).
  4. While SMART already describes a method for authorizing non-FHIR scopes by prefixing the scope with __ (two underscores), it doesn’t explicitly describe how to describe access to FHIR operations.
  5.  Is there something we want to do with confidentiality classification? E.g. set a baseline confidentiality across resources (“normal”?), and if an app wants to access higher levels (e.g. “restricted” or “very restricted”) there is an additional scope that needs to be requested?
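To illustrate the category idea above, here’s what a category-qualified scope might look like – purely hypothetical syntax for discussion, not anything the specification defines:

```python
# Hypothetical category-qualified scopes (illustrative only - this syntax
# was a discussion point at DevDays, not part of the SMART specification).
hypothetical_scopes = [
    "patient/Observation.read?category=vital-signs",
    "patient/Observation.read?category=laboratory",
]

def scope_allows(scope: str, resource_type: str, category: str) -> bool:
    """Check a category-qualified scope against a resource's type and category."""
    base, _, query = scope.partition("?")
    _, _, type_and_perm = base.partition("/")
    scope_type = type_and_perm.split(".", 1)[0]
    if scope_type not in ("*", resource_type):
        return False
    if query.startswith("category="):
        return query[len("category="):] == category
    return True

print(scope_allows(hypothetical_scopes[0], "Observation", "vital-signs"))  # True
print(scope_allows(hypothetical_scopes[1], "Observation", "vital-signs"))  # False
```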

If you’re interested in follow ups to this – see Zulip

Btw, several implementers have asked me about the problem that the Smart App Launch scopes are not fine grained enough to give proper security control over the user’s access to the resources and services provided by the server.

The scopes are not intended to be a language for managing the user’s access control; they are just a language for the user to authorize the application, and the user can’t grant the application any access that they themselves do not have on the underlying system.

Note that we’ve talked occasionally about extending HL7 standards to cover security provisioning, but vendors are by and large uninterested (it’s too invasive for system design, and too hard to get perfect, and it’s an area that demands perfection). So we have no technical standards, though we have published a standard describing ideas for standard roles in the past.

Spreadsheets for #FHIR Resources

One of the ubiquitous uses of Excel (or spreadsheets more generally) is for mapping purposes.

To help people with spreadsheet mapping exercises, I’ve published the following files:

I also added this to the ci-build at?http://build.fhir.org/definitions.xlsx.zip, so it will be present for all future versions of FHIR.


These spreadsheets are to help people who want to do spreadsheet-based mapping, as a starting point.

Note that spreadsheet mappings are limited in scope. In a typical mapping exercise, 90-95% of the elements map fairly straightforwardly, with perhaps some code mapping for things like (1|2|9) to (M|F|U) – these things can reasonably and easily be expressed in a spreadsheet, and it’s the easiest approach. But the other 5% typically involve structural re-arrangements, and/or decisions about managing specific instances in the target (new resources, when to create new items in a repeating list, etc.) – and spreadsheets aren’t an appropriate way to try to express these things.
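The “easy 90-95%” is really just lookup tables. A trivial sketch of the (1|2|9) to (M|F|U) code map as it would come off a spreadsheet row:

```python
# A code map as it would appear in a mapping spreadsheet: one source code
# per row, one target code per row. (1|2|9 here follows the common
# male/female/unknown convention mentioned above.)
GENDER_MAP = {"1": "M", "2": "F", "9": "U"}

def map_gender(source_code: str) -> str:
    """Map a source gender code, failing loudly on unmapped codes."""
    try:
        return GENDER_MAP[source_code]
    except KeyError:
        # Never guess a value for an unmapped code - surface it instead.
        raise ValueError(f"no mapping defined for gender code {source_code!r}")

print(map_gender("1"))  # M
```

The other 5% – the structural re-arrangements – is exactly what this kind of table can’t express, which is the point of the caveat above.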


Hard #FHIR Safety Problem: Synchronization

It’s a common thing for implementers to want to do with FHIR: connect to a FHIR server, and make a local copy of the information provided by the server, and then check back occasionally with the server for updates – that is, new resources, or changes to existing resources. (In fact, it might be argued that this is the thing that FHIR is for, based on actual usage).

This simple-sounding intention turns out to be very difficult to get right, for all sorts of really important reasons. And that’s a real problem.

As an example, let’s assume that you want to maintain some kind of personal patient copy of their summary – meds, problems, allergies, care plans (e.g. using the Argonaut interface). We’ll focus on medications, but the problems are the same for any kind of resource that you want to synchronize like this.

The usual way to start these things is for the client that will keep its own copy to get authorized to use the server (say, using Smart App Launch) and then do a query like:

GET [base]/MedicationStatement?patient=[X]

where [X] is the id of the patient obtained from somewhere (exactly where is out of scope for this blog post). This call will return a list of MedicationStatement resources in a Bundle.

MedicationStatement 1234 v2: status=active, code=phenytoin
MedicationStatement 2346 v1: status=active, code=salbutamol
MedicationStatement 6234 v1: status=active, code=warfarin
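The client-side handling of that first response can be sketched roughly like this (an in-memory dict stands in for the client’s real persistence, and the Bundle is abbreviated):

```python
def upsert_bundle(local_store: dict, bundle: dict) -> None:
    """Store each returned resource in the local copy, keyed by its server id."""
    for entry in bundle.get("entry", []):
        resource = entry["resource"]
        key = (resource["resourceType"], resource["id"])
        local_store[key] = resource

store = {}  # stand-in for the client's persistent storage
bundle = {
    "resourceType": "Bundle",
    "entry": [
        {"resource": {"resourceType": "MedicationStatement", "id": "1234",
                      "status": "active"}},
        {"resource": {"resourceType": "MedicationStatement", "id": "2346",
                      "status": "active"}},
    ],
}
upsert_bundle(store, bundle)
print(len(store))  # 2
```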

Note that the MedicationStatements may not include all the relevant medication information; implementers have to check whether MedicationRequest(/Order) and Dispense/Administration resources carry other critical information, depending on applicable FHIR IGs, system behavior, clinical context, etc. We’d love to nail this down, but that’s not the healthcare system we have right now.

So the client gets back this list of resources, stores them in its own storage somewhere, and then displays them to the user. Note that the medication statements that come back have a version (how many updates there have been on the server), and the all-important status:

active – The medication is still being taken
completed – The medication is no longer being taken
entered-in-error – The medication was entered by mistake
intended – The medication may be taken at some time in the future
stopped – The medication was stopped being taken
on-hold – The medication is not being taken right now
unknown – The status of the medication is not known
not-taken – The medication is not being taken (e.g. “I never took X”)

Feedback from the EHR vendors reviewing real production apps presented to their app stores is very strong:

Many applications are ignoring the status code – not displaying it, and not checking it.

That’s a shockingly unsafe practice. And the statement even applies to experienced healthcare developers. The status code is always critical, and applications can never ignore it (just how important it is will become clear further down). First point for implementers: always check the status. We’re open to ideas about how to make it more likely that implementers will check the status – but since everyone ignores the safety checklist already, it seems unlikely we can do more; others will have to take up the gavel on this one.
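As a sketch of what “always check the status” might mean in client code – the classification buckets here are my own illustration, not anything normative:

```python
# The status codes defined for MedicationStatement (see the list above).
KNOWN_STATUSES = {
    "active", "completed", "entered-in-error", "intended",
    "stopped", "on-hold", "unknown", "not-taken",
}

# Statuses that mean "do not show this as a current medication".
NOT_CURRENT = {"completed", "stopped", "not-taken"}

def classify(med: dict) -> str:
    """Decide how to treat a MedicationStatement based on its status."""
    status = med.get("status")
    if status not in KNOWN_STATUSES:
        return "reject"       # never silently ignore a missing/unknown code
    if status == "entered-in-error":
        return "hide"         # recorded by mistake - do not present as real
    if status in NOT_CURRENT:
        return "historical"
    if status == "unknown":
        return "flag"         # show, but make the uncertainty visible
    return "current"          # active / intended / on-hold

print(classify({"status": "active"}))            # current
print(classify({"status": "entered-in-error"}))  # hide
print(classify({}))                              # reject
```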

Note: under US regulations, EHR vendors can review and approve apps that institutions use, and force apps like this to get status information correct. But EHR vendors cannot review patient applications. It’s an open question who is responsible for this, or who can do anything about it.

Btw, readers will have noticed the absolutely awful status value of ‘unknown‘ in the list, and are probably wondering just what an application is supposed to do with a medication of unknown status, and why we even added that to the FHIR standard. Well, welcome to the wonderful world of healthcare records, where critical legacy records that cannot be lost in history also are so unreliable that you don’t know what they actually mean. No new record should ever be created with a status of ‘unknown’, but that won’t stop them existing.

So applications should always display and check the status – but we’ve got a lot of problems to deal with yet in this post. 

Our client now has those 3 resources in its local store, stored against the ids from the server. Some time later, it performs a follow up query:

GET [base]/MedicationStatement?patient=[X]&_lastUpdated=gt[Y]

where [Y] is the timestamp from the HTTP headers of the last call. This call returns a list of updated medications – anything that has changed since the last call.

MedicationStatement 1234 v4: status=completed, code=phenytoin
MedicationStatement 7485 v1: status=active, code=vancomycin

At first glance, this seems clear: there’s been an update to 1234 – the patient is no longer taking phenytoin, and they have now started taking vancomycin. The client adds the vancomycin to its persistent store, and updates its own copy of MedicationStatement 1234.
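Sketching that update step (again with an in-memory dict standing in for real persistence):

```python
def apply_updates(local_store: dict, updated: list) -> None:
    """Merge changed resources into the local copy; server ids are assumed stable."""
    for resource in updated:
        key = (resource["resourceType"], resource["id"])
        local_store[key] = resource   # replaces the older version if present

store = {
    ("MedicationStatement", "1234"): {"resourceType": "MedicationStatement",
                                      "id": "1234", "status": "active"},
}
apply_updates(store, [
    {"resourceType": "MedicationStatement", "id": "1234", "status": "completed"},
    {"resourceType": "MedicationStatement", "id": "7485", "status": "active"},
])
print(store[("MedicationStatement", "1234")]["status"])  # completed
print(len(store))  # 2
```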

Note, I’m assuming here that the resource ids of the medication statements are preserved – they SHOULD be, for all sorts of reasons, but not all systems preserve them. For those that don’t, the consequence is that there’s no way for any application to maintain a copy of the information downstream (a particularly subtle and pernicious form of information blocking).

So that’s all good, but what about MedicationStatements 2346 and 6234?

Well, we presume that they haven’t changed since the last query. So what if we didn’t time limit that query, and just made the original query again? The client might assume that it would get all 4 records, but it might not. Say the client did the original query again, and got 

MedicationStatement 1234 v4: status=completed, code=phenytoin
MedicationStatement 6234 v1: status=active, code=warfarin
MedicationStatement 7485 v1: status=active, code=vancomycin

What happened to MedicationStatement 2346? It hasn’t been returned – why? And what should the client do about it? Should it remove it from its persistent store or not?

Here’s a list of the more likely reasons why the record might not have been returned:

  • The source resource(/record) was hard deleted from the origin server. And so the client should also delete it? Analysis: it’s a bad practice to hard delete records that prior healthcare decisions might have been made from, or that might have been distributed. That’s what we have entered-in-error for: to mark that something was removed, and to be able to distribute that fact. But of course, you guessed it – lots of systems just hard delete records when asked
  • The record was marked as confidential on the server side, and policy is not to provide access to confidential information across the API.
    Analysis: this is just a hard problem. Security isn’t going to go away, and can always create problems like this. The resource will no longer be available, period. And it’s computationally hard to recognise that a change somewhere in the security system means a resource is no longer available to a particular client that already accessed it
  • The record was created against the wrong subject, and that’s been fixed by simply changing the subject. The resource is no longer available for this patient. Analysis: this is a variant of the security problem, since security is by patient for patient access. Like deleting records, this is also bad practice (for the same reason). Applications should mark the old record as ‘entered-in-error’ and create a new correct record. But you guessed it… most don’t
  • The system may only be returning records with status active or completed unless the client specifically asks for other status codes (or even only active, unlike this example). Analysis: Some in-production Argonaut servers do this, in response to the status problem described above. We would rather they didn’t do this, because it creates other problems – see the long discussion on the subject – but the problems this is addressing are real and serious. This situation can be rectified by the client by performing a GET [base]/MedicationStatement/2346 and seeing what is returned
  • The portal providing the API service may have (temporarily?) lost access to the underlying record store from where MedicationStatement 2346 came. Analysis: portals are often façades over much more complex underlying systems, so this is a real world possibility. There’s no ideal solution other than to pretend that it will never be a problem
  • The record may no longer be available to the production system if enough years (5-7 or longer) have passed (and most EHR systems are at least that old, if not much older, no matter how recent the FHIR interface itself is)

Note: There’s another bigger deal here – patient records may be merged or unmerged which significantly complicates matters. Every system handles this slightly differently, and applications that maintain their own persistent store for more than one patient cannot ignore this – records may be moved, or come and go, and they just have to keep up. Of course, many/most don’t do anything about patient record merge/link.

So: any client that is keeping its own persistent copy of data accessed by a FHIR search like the above has to consult with each server to find out which of those possible reasons are applicable for that server, and decide what to do about it.
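One reasonable client behaviour, sketched below: when a record you hold locally stops coming back from the search, try reading it directly before deciding anything, and flag it for review rather than silently deleting it. The fetch function here is a stand-in for a real read of [base]/MedicationStatement/[id]:

```python
def reconcile(local_ids: set, returned_ids: set, fetch_by_id) -> dict:
    """Classify locally held records that a full re-query did not return."""
    outcome = {}
    for missing_id in local_ids - returned_ids:
        resource = fetch_by_id(missing_id)  # GET [base]/MedicationStatement/[id]
        if resource is None:
            # Deleted, hidden, or moved on the server. Never silently drop it -
            # keep the local copy but flag it for human review.
            outcome[missing_id] = "flag-for-review"
        else:
            # Still readable - it was filtered out of the search (e.g. by status).
            outcome[missing_id] = resource.get("status", "unknown")
    return outcome

# Example: 2346 is gone from the search, but an individual read still works.
def fake_fetch(rid):
    return {"id": "2346", "status": "completed"} if rid == "2346" else None

print(reconcile({"1234", "2346", "6234"}, {"1234", "6234"}, fake_fetch))
# {'2346': 'completed'}
```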

Applications that don’t… are unsafe and shouldn’t be written, marketed, or used. Of course, the users have no way of finding out how an application handles this. Maybe someone will take up the task of reviewing patient apps for safety, but it will be terribly difficult (=expensive) to do that.

I wish there was better advice I could provide, and we’re working on getting better advice, but the problem is the trillions of dollars of legacy systems out there that healthcare providers don’t seem to be in any hurry to replace (with systems that don’t yet exist, mind).

The problems discussed here have already led to documented unsafe outcomes for patients in production systems that access EHRs through the FHIR interface (and in related forms they have been a documented problem for national patient record systems too).

Note that there are other ways for a client to synchronize data:

  • polling the history interaction
  • using Subscriptions

These offer different APIs with different strengths and weaknesses – but since the underlying issues are record-keeping policy issues, not API surface issues, these generally don’t make much difference, though we are working hard on the subscription route to make it possible to deal with some of these issues (more on that in a later post; and we probably will never resolve the security related issues).

Security Appliances and FHIR Servers

The FHIR Standard doesn’t say much about security. Given the critical importance of security for healthcare data, readers are sometimes surprised by this. There are, however, many different valid approaches to making a server secure, so the FHIR standard delegates making rules about security to other specifications such as the Smart App Launch Specification.

Note that there are many aspects of security, of which the most important are:

  • resistance to malicious actors – firewalls, basic security discipline
  • authentication
  • authorization
  • access control

For the purposes of this post, security means ‘exercising control over which queries are allowed, and what information they return’.

When explaining security, the standard includes the following diagram on its security page:


Security can be applied:

  • in the client
  • between the client and the server
  • and inside the server itself.

In most real world applications I look at, some security will exist in all those places.

On the client

The least important place for security is on the client – though since the client does have access to data, security does still matter (particularly in regard to side-channel attacks).

Initially, app developers don’t bother with security, assuming the infrastructure will look after side-channel attacks. But when you don’t worry about access control, you get hard error messages that look like bugs in the application. Since developers are motivated to avoid these, they end up applying some security on the client.

Of course it’s a very bad idea to rely solely on the client to solve all your security needs.

Between the client and server

In this approach, there’s a façade server between the client and the server that focuses entirely on security. These façade servers are often called ‘security appliances’. The security appliance checks that requests coming from outside are valid, applies authentication / authorization / access control, and then passes the request on to the actual FHIR server, still as a FHIR request. It then inspects the response and filters the returned information against the security policy before returning it to the original client.

Because of the importance of security, most real world applications use some kind of security appliance. At the least, the security appliance will perform perimeter tasks like preventing obvious intrusions. But the appliance can do a whole lot more than that – it can authenticate the user, handle OAuth authorization/certificate validation, and apply access control to the requests and responses.

Using a security appliance like this is a standard part of a Defence-In-Depth strategy.

A security appliance is not enough

However, real world systems also need to implement access control in the FHIR server itself, because of the way FHIR works.

As an example, take the situation where the authenticated user is not permitted to see episodes marked as mental health (whether by a particular encounter type code or by a security label), the patient at hand has two episodes – a normal admission (a-n) and a mental health admission (b-psy) – and the appliance is enforcing this policy in front of a general purpose server that has no information about the user or their permissions.

For a request such as

GET [base]/Encounter/b-psy

the appliance will see that the response is marked as a mental health encounter, and change the response to a 404 Not Found with an appropriate error message. When it gets a list request

GET [base]/Encounter

the appliance will see that the response contains two encounters; it will remove b-psy from the list and set the count of responses to 1 instead of 2. So far, so good. However, consider a request like:

GET [base]/Encounter?class=inpatient&_summary=count

Enforcing the user’s permissions on this request – a simple request to count the inpatient encounters, a simple join for the server – now depends on information that is not explicit in the response, so the appliance can’t apply the policy to the response. To enforce the policy, the appliance must perform a full search on the encounters, determine which meet the policy, and then return the count itself.
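To make the contrast concrete, here’s a rough sketch of the filtering the appliance can do for the plain list case, where the entries are visible in the response. The security label code used in the policy check is an illustrative assumption; real policies may use encounter type codes or other labels:

```python
# Illustrative sketch of appliance-side filtering of a searchset Bundle.
# The policy check (is_mental_health) is an assumption for illustration.

def is_mental_health(encounter: dict) -> bool:
    # Look for an (assumed) sensitivity label on the resource
    labels = encounter.get("meta", {}).get("security", [])
    return any(label.get("code") == "PSY" for label in labels)

def filter_bundle(bundle: dict) -> dict:
    """Remove entries the user may not see, and fix up the total."""
    kept = [e for e in bundle.get("entry", [])
            if not is_mental_health(e.get("resource", {}))]
    bundle = dict(bundle, entry=kept)
    if "total" in bundle:
        bundle["total"] = len(kept)
    return bundle

searchset = {
    "resourceType": "Bundle", "type": "searchset", "total": 2,
    "entry": [
        {"resource": {"resourceType": "Encounter", "id": "a-n"}},
        {"resource": {"resourceType": "Encounter", "id": "b-psy",
                      "meta": {"security": [{"code": "PSY"}]}}},
    ],
}
filtered = filter_bundle(searchset)
print(filtered["total"])  # → 1
```

This works only because the entries are present in the response; a _summary=count response contains no entries to filter, which is exactly the problem described above.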

In practice, the FHIR standard includes many search parameters and other features (operations, reference resolution) that make the security appliance’s task infeasible – to make the queries work, a server must be aware of the access control rules when it iterates its indexes etc.

A security appliance that cannot depend on the FHIR server to implement access control will end up prohibiting most of these queries as unimplementable – but they are standard features that clients commonly need in order to deliver effective user experiences.

Note that there’s another important consequence of applying all the security in the appliance: the server does not know the user, and can’t record the user’s identity – a key fact – in any audit trail it generates.

Integrated in the server

For these reasons, most real world applications end up enforcing access control in the server that performs the actual work of handling the FHIR request – resolving references, iterating internal indexes, etc – and the security appliance is mostly used for perimeter security.

Most of the production servers deployed today use the Smart App Launch Specification – the FHIR community’s standard profile for using OAuth – as their primary security approach. This is a great solution for user level authentication/authorization, but it doesn’t yet provide the classic B2B security connections with system level trust that the healthcare community is mostly used to.

The Smart App Launch spec reinforces the importance of server side integrated security by not describing a standard interface between the authorization server and the resource server. This makes it natural to implement strongly coupled Authorization Servers (AS) and Resource Servers (RS), and more generally, strongly coupled security systems. Note that this is not required at all – standard interfaces between RS and AS are allowed – but the absence of an accepted way to perform the decoupling encourages strong integration of the security inside the server.

Note that the only current candidate for a full open standard between the Authorization and Resource Servers is the UMA/HEART specification, which does a whole lot of other things, and hasn’t attracted much interest from the community. A lighter weight approach is for the authorization server to offer token introspection, so that a resource server can query the authorization server for details about the authorization. However, both of these approaches are limited to expressing constraints that can be stated using scopes and resource sets, while real security systems may require a richer language to meet their requirements.
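As a sketch of the introspection route, here’s how a resource server might interpret an RFC 7662-style introspection response from the authorization server; the SMART-style scope names are illustrative:

```python
# Sketch: how a resource server might interpret an RFC 7662-style token
# introspection response. The scope names (SMART-style
# 'patient/Encounter.read') are illustrative assumptions.

def allows(introspection: dict, required_scope: str) -> bool:
    """True if the token is active and carries the required scope."""
    if not introspection.get("active", False):
        return False  # expired, revoked, or unknown token
    granted = introspection.get("scope", "").split()
    return required_scope in granted

resp = {
    "active": True,
    "scope": "patient/Encounter.read patient/Observation.read",
    "sub": "user-123",
}
print(allows(resp, "patient/Encounter.read"))   # → True
print(allows(resp, "patient/Encounter.write"))  # → False
```

The sketch also shows the limitation: everything the resource server can learn this way has to be squeezed into scopes and a few fixed claims.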

Mixed deployments

A single integrated server that includes all the security features internally is fragile in other respects. In practice, servers like this are hard to manage in terms of upgrades: security upgrades can be required very quickly in response to newly discovered issues, while the application side typically requires a great deal of testing prior to upgrade. In addition, closed server systems can be hard to adapt to shifting and diverse business requirements around the FHIR server.

This is driving interest in the community around deploying a mixed security system – using a mix of both security appliance and secure server. But to make that work, the two systems have to work together.

OpenID Connect

The first obvious approach for integrating appliance and server is for the two to collaborate: the server enforces the join/integrity rules that are hard for the appliance, while all the rest of the security is left to the appliance. The appliance trusts the server to enforce the appropriate rules, and the server trusts the appliance to correctly identify the user etc by whatever method is appropriate for the business.

In order to make this work, the security appliance has to communicate the details of the request to the FHIR server. The most obvious way to do this is to pass a JWT in the Authorization header in the request from the security appliance to the FHIR server. The JWT needs to communicate at least a user identity – for which the natural choice is an OpenID Connect token – though additional details around roles, groups, and authorizations may be required.
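As an illustration of the shape of such a token, here’s a hand-assembled HS256 JWT using only the standard library. The ‘roles’ claim is an assumption; ‘iss’ and ‘sub’ are standard OpenID Connect claims; a real deployment would use a vetted JWT library and proper key management:

```python
import base64
import hashlib
import hmac
import json

# Sketch: the appliance mints a signed JWT identifying the user and
# passes it to the FHIR server in the Authorization header. Hand-rolled
# HS256 for illustration only - use a real JWT library in production.

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(claims: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

# 'roles' is an assumed extra claim the appliance and server agree on
token = make_jwt(
    {"iss": "https://appliance.example.org", "sub": "dr-jones",
     "roles": ["clinician"]},
    secret=b"shared-secret",
)
headers = {"Authorization": f"Bearer {token}"}
print(token.count("."))  # → 2
```

What the agreed claims mean – and which ones the server must honour – is precisely the technical specification that would need to be written.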

Obviously this approach requires trust, which would be established by contractual relationships. In addition, it requires a technical specification around the use of the JWT and/or OpenID Connect token, but I haven’t yet seen enough interest in this approach to justify developing such a document. I will continue to look for an opportunity to develop it.

Bulk Data

The forthcoming bulk data specification offers a different solution for organizations looking to integrate appliance and server. The basis of this solution is that the bulk data client security is established at the system level, and can access significant amounts of data. This makes it possible for the appliance to perform interesting new functions.

For instance, when a user logs in with patient level access on the security appliance, instead of the appliance enforcing access control on each request, the appliance could perform the following request on the FHIR server in the background during the login process:

GET [base]/Patient/[id]/$everything

The appliance holds the user inside the authorization process until the $everything request is completed, and then uses an internal captive FHIR Server to provide complete services to the client based on the resource set returned by the FHIR Server. This allows the appliance to offer several improved services over the base server such as support for FHIR features not supported by the base server, or integration of record sets from multiple servers.
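The captive-server idea can be sketched as a simple type-indexed store loaded from the $everything Bundle; the Bundle shape is standard FHIR, but everything else here is illustrative:

```python
from collections import defaultdict

# Sketch: load the Bundle returned by Patient/[id]/$everything into a
# small in-memory store that the appliance's captive FHIR server can
# answer read requests from. Illustrative only - a real captive server
# would also support search, paging, and so on.

def load_everything(bundle: dict) -> dict:
    """Index resources as store[type][id] = resource."""
    store = defaultdict(dict)
    for entry in bundle.get("entry", []):
        res = entry.get("resource", {})
        if "resourceType" in res and "id" in res:
            store[res["resourceType"]][res["id"]] = res
    return store

# Invented example content for the $everything response
everything = {
    "resourceType": "Bundle", "type": "searchset",
    "entry": [
        {"resource": {"resourceType": "Patient", "id": "example"}},
        {"resource": {"resourceType": "Encounter", "id": "a-n"}},
    ],
}
store = load_everything(everything)
# The captive server can now answer e.g. GET Encounter/a-n locally:
print(store["Encounter"]["a-n"]["id"])  # → a-n
```

A store like this also makes the limitations below tangible: it is frozen at the moment the bulk query completed, and writes have nowhere obvious to go.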

This approach is sometimes referred to as ‘decoupling’ the authorization server and resource server, but readers familiar with the details of OAuth will note that this is not decoupling in the direct OAuth sense.

Note that there are some evident limitations to this approach:

  • it doesn’t provide integrated audit trail in the base application
  • the information available to the client is frozen to what is available when the bulk data query is performed
  • it doesn’t easily provide for write access to the base server

All those problems are solvable to some degree or other, but they require specific agreement between appliance and server. And alert readers will note that it’s not really the bulk data access that makes this possible – it’s the system level trust that matters.

For this reason, some bulk data interfaces might include specific blocking arrangements to enforce the importance of the server’s authorization server; managing this would be a matter for contracts.

I’m sure there are other ways to solve this problem. Comments are welcome, but rather than commenting on this post, I’d prefer it if people comment here.


Open Source is the worst

Last week I spoke at the CSIRO e-Health Colloquium. During my presentation, I said:

Open source products are the worst thing in the market.

Seth Godin

You might find that a shocking claim. I certainly did the first time I heard it (from Seth Godin giving a keynote at EclipseCon in about 2007 – I can’t find a link now). But it’s really easy to defend the idea:

How can you sell something that’s worse than free?

A free product is the floor of the market. You can sell something better – but if your product is not better than open source, sooner or later your profitability will disappear:

Your business plan on the right if you’re not better than open source

Of course you might be able to extend your time in freefall with product lock in or regulatory capture… but sooner or later… splat!

So it follows, then, that if you publish open source, you’re not just giving something away for free, you’re changing the market. Everyone selling software has to respond – either improve or die.

Of course, it’s possible that open source is the best on the market as well as the worst. That’s already the case in some markets – very technical ones with a low surface area and a big functionality volume (maths and technical tools). On the other hand, it’s not inevitable that open source is, or ever will be, the best. I don’t expect that open source will ever be the best for software that has user facing components and supports workflows that answer to business needs – which includes most EIS systems, notably EHR systems (though there are already excellent open source options, e.g. openMRS, yay).

But we will see open source replacing infrastructure throughout healthcare.

Which reminds me: any organizations still selling healthcare standards material – your business plan is pretty much in the same place as the picture above. Sooner or later…

p.s. someone at the e-Health Colloquium asked me to post about this, they loved the idea so much. I couldn’t resist the clickbait title…

Clinical Messaging in Australia

The Australian Digital Health Agency is working hard on replacing faxing with secure messaging. Peter MacIsaac discusses one of the ancillary challenges this causes in Pulse IT today:

The second barrier to successful cross-transfer of messages is that the messages sent by almost all health services do not comply with Australian messaging or vocabulary standards.

Likewise the major clinical system vendors are not capable of processing a standard HL7 message, if one were to be delivered to them. Senders and receivers have each interpreted the international HL7 messaging standard independently of the agreed Australian standard and associated implementation guidelines.

I don’t think this quite expresses the problem – while there definitely are problems with non-conformance, there are also areas where the Australian standards are simply not detailed enough, and a lot of the problems lie in that area.

Peter also recalls that we discussed this: 

A collaborative effort to achieve networking by messaging vendors some eight years ago was run in a process facilitated by IHE Australia, HL7 Australia and the MSIA

Indeed we did, and we came up with a list of issues with implementations that went beyond non-compliance with the standards. I later wrote these up for the MSIA, but the MSIA never published the document or pushed for conformance to it – another lost opportunity, from my point of view. Since the document was never released openly, here it is:

Looking back at this – the document format rules around PDF, RTF, etc are problematic. That’s the set of rules that we required then – and pretty much still require now – to get truly clinically safe interoperability. But I don’t think many implementers in the industry can actually implement them – they depend on libraries that just don’t offer that kind of control. To me, this underlines the fact that clinically safe interoperability is always going to be a work in progress, since we need better standards compliance than the wider industry can deliver (so far as I know).