mononcqc
@mononcqc

This week's paper is a draft from Ross Koppel, Sean Smith, Jim Blythe, and Vijay Kothari titled Workarounds to Computer Access in Healthcare Organizations: You Want My Password or a Dead Patient? First of all, great title. This paper is a work of ethnography: the authors sat and studied how people in medical settings did their work when interacting with computers, and noted all sorts of workarounds they'd take to bypass security rules that they judged a hindrance to their work.

The idea behind the paper is that the people behind the computer systems are clearly not working from a realistic understanding of what medical professionals have to contend with to do their job. And maybe, just maybe, if they sat down and figured out how said professionals actually do their work, things might be different:

Cyber security efforts in healthcare settings increasingly confront workarounds and evasions by clinicians and employees who are just trying to do their work in the face of often onerous and irrational computer security rules. These are not terrorists or black hat hackers, but rather clinicians trying to use the computer system for conventional healthcare activities. These “evaders” acknowledge that effective security controls are, at some level, important—especially the case of an essential service, such as healthcare. [...] Unfortunately, all too often, with these tools, clinicians cannot do their job—and the medical mission trumps the security mission.

Mostly, the idea is that computer and security experts rarely happen to also be clinical care experts. What the paper finds through observations, interviews, and reports, is that:

workarounds to cyber security are the norm, rather than the exception. They not only go unpunished, they go unnoticed in most settings—and often are taught as correct practice.

They break the workarounds down into categories, and they're just amazing.

Authentication

They note endemic circumvention of password-based auth. Hospitals and clinics write down passwords everywhere; sticky notes "form sticky stalagmites on medical devices and medication preparation room[s]". They've noted things like:

  • entire hospitals sharing a password for a medical device (the password is taped on the device)
  • emergency rooms' supply rooms with locked doors but the code is written on the door as well
  • vendors that distribute stickers to put your password on your monitor
  • computers with all employees' passwords in a Word doc shortcut on the desktop

[Image: sticker distributed by a health IT vendor, stating "You may use these stickers to write your username and password, and post it on your computer monitor". The sticker has a link to the login URL, and a line for a username and a password.]

In general, this happens because no one wants a clinician locked out of emergency supplies, and a patient dying, because the code slipped their mind. In some cases, passwords are shared so everyone can read the same patient charts, even when each clinician has access of their own. In some cases, bad actors can use this to mess with data.

But really, even the passwords themselves are worse in healthcare. The paper states "the US Inspector General notes that NIST will certify EHR systems as secure even if passwords are only one character long", for example.

Password expiry also gets a slam:

one physician colleague lamented that a practice may require a physician to do rounds at a hospital monthly—but that unfortunate expiration intervals can force the physician to spend as long at the help desk resetting an expired password as he or she then spends treating patients.

De-Authentication

This one is neat. After you've authenticated someone, you need to de-authenticate them when they walk away, so that their session ends and nobody piggybacks on their login. Forgetting to log out can lead to abuse, or to mistakes where information gets entered for the wrong patient. Unfortunately, de-authentication is often seen as a hindrance as well, and so they note the following workarounds:

  • defeating proximity sensors by putting styrofoam cups over detectors
  • asking the most junior person on staff to keep pressing the space bar on everyone's keyboard to prevent timeouts
  • clinicians offering their logged-in session to next clinicians as a "professional courtesy" (even during security training sessions)
  • nurses marking their seats with sweaters or large signs with their name on them, hiding computers, or lowering laptop screens to mark them as busy

One clinician mentioned that his dictation system has a 5-minute timeout that requires a password, and that during a 14-hour day he spends almost 1.5 hours logging in. In other cases, the auto-logout feature exists on some systems but not all of them, so staff sometimes expect to be logged out when they are not.
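As a side note, none of this is from the paper, but the mechanism all these workarounds defeat usually boils down to a timestamp check. Here's a minimal sketch of an idle-timeout policy, with a hypothetical 5-minute value mirroring the dictation system above:

```python
import time

class Session:
    """Minimal sketch of idle-timeout de-authentication.

    IDLE_TIMEOUT is a hypothetical value, not anything specific
    from the paper.
    """
    IDLE_TIMEOUT = 5 * 60  # seconds

    def __init__(self, user):
        self.user = user
        self.last_activity = time.monotonic()

    def touch(self):
        # Any input resets the clock -- which is exactly what the
        # space-bar-pressing workaround above exploits.
        self.last_activity = time.monotonic()

    def is_expired(self, now=None):
        now = time.monotonic() if now is None else now
        return now - self.last_activity > self.IDLE_TIMEOUT

s = Session("dr_a")
print(s.is_expired())                            # fresh session: False
print(s.is_expired(now=s.last_activity + 600))   # 10 min idle: True
```

The point of the sketch is how indiscriminate the check is: the system can't tell a clinician mid-procedure from one who walked away, so staff feed it fake activity instead.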

One specific example of such a usability problem is:

A nurse reports that one hospital’s EMR prevented users from logging in if they were already logged in somewhere else, although it would not meaningfully identify where the offending session was. Unfortunately, the nursing workflow included frequent interruptions—unexpectedly calling a nurse away from her COW. The workflow also included burdensome transitions, such as cleaning and suiting up for surgery. These security design decisions and workflow issues interacted badly: when a nurse going into surgery discovered she was still logged-in, she’d either have to un-gown—or yell for a colleague in the non-sterile area to interrupt her work and go log her out.

Which is an interesting way to see how compliance requirements can interact oddly with the reality on the ground.

Breaking the Representation

Usability problems often result in medical staff working around the system in a way that creates mismatches between reality and what the system sees reported.

One example given is an Electronic Health Record (EHR) system that forces clinicians to prescribe blood thinners to patients meeting given criteria before they can end their session, even if the patient is already on blood thinners. So clinicians resort to a risky workaround: order a second dose of blood thinners (which could be lethal if the patient actually got it) so they can log out, quit the system, then log back in to cancel the second dose.

Another example comes from a city hospital where creating a death certificate requires a doctor's digital thumbprint. Unfortunately for that hospital, only a single doctor has thumbs the digital reader manages to scan, so that doctor ends up signing all the death certificates for the hospital regardless of whose patient the deceased was.

There's yet more for these mismatches:

  • the creation of shadow notes, paper trails that get destroyed because they are not wanted in the official formal record
  • "nurse's brain" notes that list all of a patient's tasks for a shift (something the computer does not support)
  • the creation of shadow notes because the computer doesn't allow enough precision
  • needing to note the operating room (OR) admission time precisely when the computer is 2 minutes away from the OR and won't allow future times (on paper, nurses just wrote now + 2 minutes); so the nurse logs in, turns off the monitor, wheels the patient into the OR, then runs back out to mark the record with a more accurate time

None of this is really surprising to me; any inadequate system seems to have a tendency to create its own shadow workflow that hides problems by working around them.

Permission Management

Access control plainly sucks, if I can be allowed the editorial tone:

Clinicians often have multiple responsibilities—sometimes moving between hospitals with multiple roles at each one, but accessing the same back-end EHR. Residents change services every 30 days during their training. If access is limited to one service, it needs to be reconfigured that often. However, a resident may be consulted about a former patient, to which he/she no longer has access. More frequent are clinicians who serve in multiple roles: the CMIO may need access to every patient record, not only those in her/his specific medical sub-discipline. A physician who focuses on infectious disease may also be on the committee that oversees medication errors, and thus requires access to the pharmacy IT system and the nurses medication administration system. In some hospitals, nurses sometimes authenticate as nurses and sometimes as doctors.

Also not surprised.

Undermining the Medical Mission

Many health IT systems are so bad they're seen as harming the medical objectives of practitioners.

The example given here is tele-ICU, where patients in some hospitals must be monitored from distant nurse stations that receive a video feed and all the vitals. However, when bathing patients, the nurses have to cover the cameras to protect patient privacy, and so the ICU can no longer monitor them adequately.

There's also a case where a doctor couldn't find the required medication in the software. He found a custom free-text field where he noted the prescription, but the box was not visible on the other end, so the prescription was never given and the patient lost half his stomach.

Finally, the authors circle back to the value of ethnographic investigations for properly adapting tools to the work. They end by stating:

in the inevitable conflict between even well-intended people vs. the machines and the machine rule makers, it’s the people who are more creative and motivated.

I do also appreciate the 'well-intended' qualifier, and I felt like this sentence was a good way to end given current events.



in reply to @mononcqc's post:

Just about twenty years ago, I worked for a division of a no-longer-extant conglomerate that produced software for hospitals, warehouse and store rooms, in our particular office's case. It remains one of the few jobs where I had to raise my voice regularly, because every decision involved someone saying "well, how can we know how the end users would use this," and me gesturing emphatically out the conference room window at the hospital across the street. I wanted to walk over and ask people for help, but they hated that idea, in favor of working from even-then-outdated advice like the password expiration nonsense mentioned early here...

at every company i have worked with i have made a fuss about talking to real users, trying to push product and engineers to talk to the real end users - NOT just the upper management or the financial people who placed the order

they are always hugely resistant, it blows my mind

I was a nurse at a healthcare company that had a group of clinicians come into the IT department to "give feedback on the software"--something we were all very eager to do. But then we spent the entire time doing basic testing like "when you click on the start button, does the program start?" and I never did figure out why they needed nurses for that.

In retrospect, maybe what really happened was that this was some kind of bizarre compromise between "talk to end users" and "end users don't pay our rent" internal factions. Fine, we'll involve end users, but for god's sake don't ask their opinions.

it might be, but it is just as likely that they believed that was actually how they were supposed to do new-user testing

but the problem is that you were not new users

and also the people generally tasked with setting things up have zero knowledge or experience in how to do useful testing in the first place

and most of the time they have an idea of what chart they want to see at the end and ignore all other data, making most of the data they do collect severely biased and useless

they eventually realize that it was useless and blame the users instead of their testing methodology

i have seen this all too many times!

Absolutely. I had high hopes, a few years later, when (what we now call) user experience people started showing up. But we quickly segregated them from the development team and, in a lot of places, mostly replaced them with graphic artists...

From non-medical IT, I recognize a lot of these problems also existing in, say, retail IT.

My comment on the very last thing is that this step is usually skipped because A) it would hurt someone's pride to admit the system needed work and B) it would cost money and take time, rather than letting them slap in some generic IT solution with a few extra fields filled in.

Our most recent IT system got installed after one hour of "training" for everyone, followed by almost a full year of taking no feedback whatsoever from anyone actually using it.

Wow, really good (if grim) read.

I feel like a lot of these problems could be alleviated by alternate login systems. Something like RFID tags, or the same kind of system that car keyfobs use. Secure but fast. Or like, hardware USB keys that work as authentication tokens.

This way secure logins would be fast and transparent, and the IT personnel could set whatever level of security they want, and the nurses/clinicians couldn't degrade it, either!

It would improve both security and usability at the same time.

We do this!! I work IT at a hospital and we have two types of workstations: single-user and multi-user. The multi-user workstations are used by the doctors and nurses, and have an RFID reader that lets you just tap your ID badge to log in without typing anything. The users also aren't actually logging in; they're authenticating with their credentials to unlock a system account, one that has very limited access on the PC, a short timeout, and that basically wipes itself every time someone else authenticates to unlock it, so no using someone else's login to do stuff

Sun, back when they did useful stuff, had a great smartcard system for this. You would sit down at any workstation or thin client and slide in your card. Did it log you in? No, better: it would instantly pop up your existing session on the screen with all your work exactly the way you left it, just like opening your current laptop does today.

I've been infected recently (so I'm very late to the party) with the ideas of domain-driven design, to try and map the design of software to the domain of the expert users.

Maybe it was even attempted in this case, but only the business stakeholders were at the table and communicated their wrong imagined ideas of how the processes seem to work, instead of having someone from the trenches communicate how the processes actually work.

It may not be only the fault of software developers having no idea of how the domain works but also of the business owners having no idea how their company works at that level.

I’m not a medical professional but I am chronically ill as fuck and I’ve spent a few stretches in the hospital. One time I found out there were a couple Internet connected computers in the cafeteria that patients could use, and they were also used by staff on breaks. They had a single, shared login on a sticker or post-it or something and weren’t particularly locked-down.

One day, after doing my usual Internet Things and getting bored, I poked around the hard drive and found A TON of patient records. It looked like staff were using the account to share documents. I suspect that login worked on any computer on the network.

I did not tell anyone.

I wonder if it’s still like that.