CIMI MTF Minutes 20121018

CIMI Modelling Taskforce - Meeting Minutes
Thursday 18th October 2012 @ 20:00-22:00 UTC


Attending

  • Linda Bird
  • Mike Lincoln
  • Jay Lyle
  • Stephen Chu
  • Stan Huff
  • Dave Carlson
  • Dipak
  • Galen Mulrooney
  • Michael van der Zel
  • Mark Shafarman
  • Heather Leslie
  • Peter Hendler
  • Eithne Keelaghan


Agenda

  • Weekly news & updates
  • Comparative analysis spreadsheet progress
  • Laboratory Results Group Model

Next meeting:

  • HL7 Model Submissions (Mark)

Weekly News & Updates

  • The Netherlands Meeting - confirmed (Sunday 2nd to Tuesday 4th)
  • Guests
  • Other CIMI meetings:
    • UML Taskforce met on Monday
    • Terminology subgroup met on Tuesday
      • New CIMI namespace id: 1000160
    • Glossary subgroup meet every 2 weeks on Tuesdays
  • Should we meet next week? NO
  • Should we arrange an informal catch-up at IHTSDO?
  • Models submitted so far:
    • Canada Infoway (HL7 v3 models)
    • MOHH (Singapore)
    • IMH
    • NEHTA/openEHR
    • EN13606 Association - Lab Test example
    • Results4Care - Lab Report
    • HL7 (CDA documents)
    • NHS Temperature Model

Modelling process

  1. Analyse clinical models submitted (with value sets)
  2. Identify maximal set of data elements
  3. Remove 'out of scope' data elements (Style Guide)
  4. Select appropriate CIMI Modelling Patterns(Style Guide)
  5. Define CIMI model structure (Mindmap, ADL, UML)
  6. Add Terminology bindings
    1. Meaning (nodes, node relationships)
    2. Value sets (maximal set from submitted models)
  7. Add Example Model Data Instances
  8. Technical Validation
    1. ADL, UML
  9. Clinical Validation / Review
  10. Confirm mappings from submitted models
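
The ten-step process above can be sketched as a small pipeline. This is an illustrative sketch only: the step names come from the list above, while the function names and the plain-dict model shape are assumptions for illustration, not part of any CIMI specification.

```python
# Illustrative sketch of the CIMI modelling process (steps 1-6 above).
# Data shapes (plain dicts/sets) and helper names are assumptions.

def analyse_submissions(submitted_models):
    """Steps 1-2: analyse submitted models, collect the maximal element set."""
    maximal_set = set()
    for model in submitted_models:
        maximal_set.update(model["elements"])
    return maximal_set

def remove_out_of_scope(elements, out_of_scope):
    """Step 3: drop elements the Style Guide marks as out of scope."""
    return elements - out_of_scope

def bind_terminology(elements, bindings):
    """Step 6: attach meaning bindings (node -> code) where known."""
    return {e: bindings.get(e) for e in elements}

# Toy run over two pretend submissions:
models = [
    {"name": "FHIR",    "elements": {"test_name", "specimen", "priority"}},
    {"name": "openEHR", "elements": {"test_name", "reference_range"}},
]
maximal = analyse_submissions(models)
in_scope = remove_out_of_scope(maximal, {"priority"})
bound = bind_terminology(in_scope, {"test_name": "example-code"})
```

The remaining steps (model structure, instances, validation, mappings) would hang off the same pipeline; they are omitted here for brevity.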

Comparative analysis spreadsheet progress

Use cases include:

Creatinine clearance

Histopath

Issues to be followed up on:

  • Reporting priority vs Testing priority
  • Test Procedure/method included at both panel and test levels.

Detailed Minutes

Linda: Next week's meeting cancelled. [No meeting Oct 25th]

Look at comparative analysis spreadsheet. Have done initial draft. I'll send it out.

Mark is joining today, but he will be presenting during our next meeting in 2 weeks.


Linda: There will be a meeting in the Netherlands.


Linda (cont): The IEC has discussed uninvited people attending the meeting. Someone attended last week, so we need an official position on what to do when an unofficial guest arrives. Inviting others is also a group decision. Virginia sent out an email about this - please discuss with the group or with Virginia if you wish to invite someone; otherwise we will need to ask uninvited attendees to leave. Virginia also wants to record sessions and will look into this. Any questions?


Linda (cont): Also some other CIMI meetings. Dave - do you want to give short update of UML meeting? [Dave doesn't answer - unable to respond through gotomeeting]. Michael - do you want to give a review?


Michael: We reviewed ways to present the models in UML and what we need... better than ADL. This led to some proposed minor changes in the reference model.


Linda: Also - try to take UML and generate ADL from that?


Michael: Yes.


Linda: Terminology subgroup met. Discussed nodes. Start to look at requirements for terminology binding in next few weeks. Please feel free to join if you wish.

Thanks to Harold for the CIMI namespace. And the Glossary subgroup is meeting every 2 weeks on Tuesdays.

Sarah Ryan suggested a catch-up at the IHTSDO meeting. Is anyone on this call going to that meeting?

[to Jay] - Do you want to give summary of glossary work?


Jay: We added a... DCM... [did not hear]


Linda: Models submitted from Canada - refers to HL7. The modeling process - to analyze models submitted.


Michael: I will send out temperature model.


Linda: Look at most of models submitted. Need to look at CCD. Also, Canada model.

Also, identify maximum set of data elements. Want to show you draft of where we are with this.


************************ Spreadsheet with Lab Report *************************


Linda (cont): Took models and looked at... [Linda names each of the sheets]


Laboratory Test Result Item:


[Image-1]


Reference Range:


[Image-2]


Specimen:


[Image-3]


Anatomical Location:


[Image-4]


If you remember the diagram with the different components of the lab report: I went to the FHIR web site - they have changed their model. I haven't had a chance to talk with Graham. Their diagramming is different.

Specimen is done differently in different models. A result group has 1 specimen.

The other thing: Lab Result - Mindmap.


Mindmap Lab Result


Looked at Apgar Scores - multiple or...

First = single entry approach. If we want to split apart.


Heather: Can I point out a few [things]? You have specimen and anatomical... We separate these out and they get reused and reused. Bringing up modeling options: Can you give examples of who uses these this way? If only academic, then is problematic. So the Mindmap diagram - which system is this used in?


Linda: This is the first option.


Heather: The Mindmap you are showing - where is it being used, or is it theoretical? Want clarification.


Linda: Well, we can adopt one of the approaches, or take the maximal set of those and put this as... I don't think we have a system that has this...


Heather: This is a theoretical exercise and not sure if this is going to work. Part of the problem stated last week is where do you draw the line. Ultimately, need to put into system. If not based on real model that works, don't know if will work and may take years.


Linda: What are you suggesting?


Heather: Well - might take a long time. Why don't we look at those used in real systems?


Linda: That is why we are doing this analysis.


Stan: We can go about this a lot of different ways. One of the principles that I am trying to figure out is the idea... I am trying to avoid everyone having to learn everyone's models, or learn everyone's paradigm. I think it is a fair question. Is this a theoretical consolidation of models? Is this style working somewhere? And we can answer, if that eases your [discomfort]. Want to present and people say *There are essential parts of what we do that can't use this model.* And when we get more complex models, may be more of an issue. So I would back Linda. And I would ask the group. The Mindmap way of looking at this is good because when doing investigation, good to use Mindmap so don't worry about exact notation. I have been happy with the way it is going. Could go faster... But is useful to look at patterns the way Linda is doing it. So we can say - OK - we like pattern #2. And if you know a reason why it doesn't work, then [review]. So discern patterns and make higher level decision. This is pattern we want to follow. We can open up for decision, but don't want to discuss too long.


Stephen: The Mindmap you are showing - you attempt to harmonize the models. I suggest - consolidate down to 2 Mindmaps. One - consolidate all into one. 2nd - build on first one that has other data elements... models... and look at what they are and why they are... Comprehensive analysis... Until we are able to say what is different and why... we will be going all over the place.


Linda: I have only had the chance to prepare one pattern and I was hoping to present and get guidance.


Stephen: I think [it is a] good starting point.


Mark: I like comparing... We want interoperability. Want model... to have enough specificity...


Linda: Is group happy...? I want to get to ADL, but want to narrow down to option.


Heather: I am asking if Mindmap is represented... is it similar to a system in FHIR? Previously, 3 or 4 Mindmaps. Is this theoretical or is it being used in different places? I don't think you understand. Just continue.


Linda: Models. [shows spreadsheet - Lab Test Result]


[Image-6]


Linda: Intermountain - each model was about individual test item. Each would be grouped into a collection.


Stan: We can make panels. In current corpus there aren't panels. But we do make panels.


Linda: Good to look at whether shared by panels or...


Stan: You have: who did test, specimen descriptions - all at panel level. Better to review models and these things will come up.


Linda: So spreadsheet - looking at collection. FHIR report at (?) level. OpenEHR at (?) level. And CCD at Organizer...


Linda: Info about test procedure, results, when performed, timing. So, firstly - actual test performed [under Data Element spreadsheet]


Linda: So recording test performed. MOHH model - additional description.

Diagnostic type [Linda reads under each, model for FHIR, OpenEHR, HL7... result-type code]


Michael: I wanted to mention - ours should be similar to CCD since it is based on...


Linda: [Shows Michael slides] This test performed is with Panel Name. Where is the name of the Panel? If it is there let me know.


Michael: [Not sure?]. Will you be sending out spreadsheet?


Linda: Yes. I think it is important for each to go over mapping. So check for accuracy and bring issues to meeting.

[Lab Test Result]


[Image-7]


Linda: So that was a diagnostic-type(?). So specimen [Linda shows FHIR] - in FHIR they record - interesting - they have the specimen at report level and group level. 0 to many specimens at Lab Report, but [0 to 1 at Group level?].


Linda: In NEHTA - 0 to many at... level, and specimen detail, also 0 to * .

In MOHH - 0 to * at investigation/group level. At panel level - exactly 1 specimen.

HL7 - 0...* at the organizational level. Looking at result-item level. Only Intermountain...

So, most general way to do is 0 to * specimens, and whether we do 0 to * on test level...


Stan: We used to talk about this Use Case. For creatinine clearance you need a serum sample for the serum creatinine and a urine sample for the urine creatinine, and using both you can calculate the creatinine clearance - what % of creatinine the kidneys remove, which is one of the things they do. So I can see, at the panel level: this is the time we drew the serum... and we want to know this serum creatinine level came from this sample. You could restate it, but it would be redundant - we know it is calculated. I think this is the Use-case. So it makes sense: 0 to * at the top, and a single specimen optionally allowed at the (?) group.


Stan (cont): And we come to Heather's point - we haven't done it this way. But we send 1 result = serum result, 1 result = urine result, and then 1 result = calculated. Lab systems typically ignore this level of detail. It comes back to... what are people actually using and doing. So this is my understanding of why you want multiple specimens at the top and only 1 at the individual result-group level. If you wanted to indicate specifically [which specimen] this was created [from], it would be overkill. It is not as if the result is ambiguous if you don't do this.
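
Stan's creatinine-clearance use case can be sketched with the cardinalities just discussed: 0..* specimens at the panel level, at most one specimen per individual result group, and a calculated result referencing no single specimen. The class names are assumptions; the clearance formula is the standard one (CrCl = Ucr × V / (Pcr × t), here with a 24-hour = 1440-minute collection).

```python
# Sketch of the creatinine-clearance panel: two specimens at panel level,
# each measured result tied to at most one specimen, calculated result tied
# to none. Class/field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ResultGroup:
    test_name: str
    value: float
    specimen: Optional[str] = None   # 0..1: at most one specimen per group

@dataclass
class Panel:
    name: str
    specimens: List[str] = field(default_factory=list)  # 0..*: all specimens used
    results: List[ResultGroup] = field(default_factory=list)

def creatinine_clearance(urine_cr, urine_vol_ml, serum_cr, minutes=1440):
    """CrCl (mL/min) = (Ucr * V) / (Pcr * t), concentrations in matching units."""
    return (urine_cr * urine_vol_ml) / (serum_cr * minutes)

panel = Panel("Creatinine clearance", specimens=["serum", "24h urine"])
panel.results = [
    ResultGroup("Serum creatinine", 1.0, specimen="serum"),
    ResultGroup("Urine creatinine", 100.0, specimen="24h urine"),
    # The calculated result is not tied to a single specimen:
    ResultGroup("Creatinine clearance",
                creatinine_clearance(100.0, 1440.0, 1.0), specimen=None),
]
```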


Stephen: I agree with Stan.


Heather: The other notion - Graham - he insisted to have specimen at both levels. When have multiple histo-specimens, have different tissues... need this.


[Phone - Peter H.?]: The idea that you would have different (?) at different levels is generic. It will come up for other things, not just specimen. So have... times for any leaf(?). Don't just solve it for specimens. We want to solve it so that, at any granular level, you can have a time stamp.


Stan: How do we want to capture in terms of...? At some level these options are isosemantic. But our goal is to choose 1 preferred one. Can we answer this question as we go, and end up with preferred model?


Linda: Hoping to go through 1 section at a time.


Peter: When specimen drawn different when test performed... Can use timing specification. Any node at any level.


Stephen: We should look at all use-cases and determine how can accommodate all use-cases. We should capture all use-cases. The examples from Stan and Heather. Put these together.


Stan: I agree in a sense and not in another sense. People have working systems where has been discussed. Avoid going through all use-cases. Let's say based on our intuition - this is what is happening, and then each says *this works for my group* or *this does not work*. We will not get through if each has to understand each other's Use-case. Won't get anything done if we do this.


Stephen: We have people at the table... with use-cases. We'll be back to square one and each arguing...


Stan: No - I just don't want to go through each use-case, all together. But each does this... Not going to ignore any use-case. I only want to document preferred structure and if doesn't work for someone, we discuss. Don't want to define all requirements and Use-cases up front. Would rather choose one and have each say why it doesn't work, and then we can fix it. So only look at use-cases if it changes the structure of the model. Don't want to be exhaustive. Expect each of us to do this. Don't want to go through each.


Linda: So puts responsibility on each to do this.


Mark: I need to go. Apologize.


Linda: Most approaches have separate model for specimen and... use slot. So don't want to go through each specimen now. But will go through at different time. So, other thing, test procedure. In Singapore, priority of test procedure. Also, point-of-care test indicator. I assume it is out-of-scope.


Stan: I didn't think it was out-of-scope. I think it is in-scope.


Linda: OK - we find it is sometimes pre-coordinated into test name and...


Stan: Yes - There are big discussions in LOINC on this also. LOINC defines codes for Point-of-care Hematocrit vs. Hematocrit; Point-of-care serum NA+ vs. Serum NA+. However, point-of-care can alternatively be defined as a flag in the structure, as you have done.
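
The two representations Stan describes are isosemantic: a pre-coordinated code versus a plain code plus a structural point-of-care flag. A minimal sketch of mapping between them, using made-up placeholder codes (not real LOINC codes):

```python
# Two isosemantic forms of "point-of-care hematocrit": a pre-coordinated
# code vs. a base code plus a point_of_care flag in the structure.
# "HCT-POC" and "HCT" are invented placeholders, not real LOINC codes.

PRECOORDINATED = {
    "HCT-POC": ("HCT", True),   # point-of-care hematocrit
    "HCT":     ("HCT", False),  # ordinary hematocrit
}

def to_postcoordinated(code):
    """Rewrite a pre-coordinated code as (base test, point_of_care flag)."""
    return PRECOORDINATED[code]

def to_precoordinated(base, point_of_care):
    """Inverse mapping: fold the flag back into a single code."""
    for code, pair in PRECOORDINATED.items():
        if pair == (base, point_of_care):
            return code
    raise KeyError((base, point_of_care))
```

Either form carries the same meaning; the choice is a modelling-style decision of the kind the group is weighing here.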


Linda: Clinical information provided...


Stan: Meant to encapsulate things like last menstrual period.


Heather: It is clinical notes.


Stan: So if [patient] is on O2 and doing blood gases. Or... add observation...


Linda: There is the potential to include other observations in the Encounter Details model.

The next 3 data elements: Placer Order Number, Filler Order Number, Accession Number.

In Singapore, these are recorded at the Panel level.

At Intermountain these are associated with the Order.


Galen: I've noticed Accession# is associated with specimen, not test.


Stan: Filler Order# - corresponds to the laboratory's identifier of the order. If > 1 specimen, then Accession# on the specimen. So Accession# - with specimen identifier.


Linda: So Singapore accession# is filler order #.


Heather: In OpenEHR, we have Laboratory Test Result Identifier to identify the results.


Linda: Test request details are on a separate worksheet. Should these numbers be associated with the test request? Should we move to lab request page since referring to report itself? All right?


Stan: Yes.

Heather: Yes.


Linda: So information recorded about test performed. [shows spreadsheet-Lab Test Result]


[Image-8]


This refers to Mindmap model [Shows slide]


Mindmap Lab Result


Name of test performed. Status of test performed - refers to whether the test was performed, rather than whether it is complete.


[Specimen...

Priority...

Clinical information provided...]

[in Mindmap diagram]


Stan: Diagnostic Type. Hematocrit is always done in hematology, so we know what the diagnostic type is... Good to put it there. Could someone describe why it should be put in the individual result?


Heather: I discussed with Graham and he wanted it here explicitly... wanted it stated...


Linda: Include in model, but derivable.


Galen: That is how I do it in FHIM.


Stephen: Why need to mark as derivable? Is 0 to 1.


Linda: Diagnostic type is the ancestor of the Test performed that belongs to a given 'Diagnostic type' reference set. By defining this derivation rule, people who don't want to store this element have a mechanism for deriving it.


Stan: Giving a little more information, because some 0-to-1 are not derivable.


Stephen: OK.
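
Linda's derivation rule can be sketched as a walk up a test's ancestors until one is a member of a 'Diagnostic type' reference set. The hierarchy and reference set below are invented for illustration; real models would use SNOMED CT or similar content.

```python
# Sketch of the diagnostic-type derivation rule: the Diagnostic type is
# the ancestor of the performed test that is in the 'Diagnostic type'
# reference set. Toy hierarchy and refset - illustrative only.

PARENT = {                       # child -> parent, a toy is-a hierarchy
    "Hematocrit":      "Hematology test",
    "Hematology test": "Laboratory test",
    "Serum sodium":    "Chemistry test",
    "Chemistry test":  "Laboratory test",
}
DIAGNOSTIC_TYPE_REFSET = {"Hematology test", "Chemistry test"}

def derive_diagnostic_type(test_performed):
    """Walk up the ancestors until one is in the Diagnostic type refset."""
    node = test_performed
    while node is not None:
        if node in DIAGNOSTIC_TYPE_REFSET:
            return node
        node = PARENT.get(node)
    return None  # not derivable for this test (some 0..1 elements aren't)
```

This shows why marking the element as derivable carries real information, as Stan notes: a plain 0..1 element gives a consumer no such rule to fall back on.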


Stan: Also from HL7 discussions... I assume this is the priority of which something resulted. Can order [lab] routine, but result stat, or can draw now [stat] and result now [stat]. So a priority of how soon test done = priority, and then how soon reported = priority. So I assume this is resulting priority since the ordered priority will be in the Lab Request worksheet. Or is the model not distinguishing?


Linda: Hendry - are you there? No... I believe priority is for when investigation performed. I will take back to Hendry.


Stan: I think we have modeled performing priority and resulting priority. Not remove it from here.


Linda: [CEM browser on screen] - CoreStandardLabObs - reporting priority...


Stan: So in Lab Order, we have testing priority. How soon want test done. Whereas, this [reporting priority on screen] is reporting priority. So if said Stat, this would trigger to call someone. Other priority is key to person drawing blood - whether now or next draw. So that is why this [reporting priority on screen] is reporting priority.


Linda: Yes - important distinction. I need to investigate Singapore.


Stephen: Stan - Test performed priority and reporting priority... Why are we concerned with internal flow of lab processing in this model?


Stan: We're not talking about what goes on inside the Lab. This relates to what is in the clinical system. It has to do with how soon the results need to be reported to the clinician.


Stephen: But I don't care in Results cluster if... I as ordering clinician, I want results to be available urgently if I mark as urgent, so I only need to know.


Stan: How do I know the difference? The behavior is in the system - which is different.


Stephen: I overlooked Lab Use-case. OK.


Stan: We don't have additional descriptions.


Linda: Only from Singapore and Hendry not on this call. So I need to confirm.


Stan: Additional Description: I am assuming name is a coded element and have code and an additional description - are you doing this differently because different structure?


Linda: Thanks for getting me to check. The laboratory model that Singapore submitted does not have Additional Description - so I have removed this from the spreadsheet.


Stan: Also things missing that I thought would be there... Oh, no - they will be down at results.


Stan: In HL7, the clinical information provided can be a set of other observations. So I can send FiO2 in the request, and it is sent back with the result. And somewhere else in the record... you know that this is the FiO2 the patient was on at the time the test was done. I am talking about v2 of HL7. So, with most labs it comes back as if it were a result that is part of the panel - it does not come back [marked] as previously known. So blood gas, oxygen concentration, inspired FiO2... So you are trusting clinicians to know it is a passed-through value, not [measured] in the lab itself.


Linda: We are currently not analyzing v2 messages, as these were not a submitted model.


Stan: Not familiar with whether this is done in v3... encoded with lab observation.


Linda: We need to distinguish what came from lab or was part of the info from order...


Stan: What is 'Details'?


Linda: If we define a constrained archetype on this model, and need to extend with additional elements, then you define these as constraints on 'Details' .... So 'Details' provides an extensibility mechanism, to extend the model with new elements.
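
The 'Details' extensibility mechanism Linda describes can be sketched as a base model with fixed elements plus an open slot, where a constraining archetype declares which extra elements are allowed. The element names and the validation approach are assumptions for illustration.

```python
# Sketch of the 'Details' extensibility slot: base elements are always
# valid; extra elements are valid only when a constraining archetype
# declares them. Names are illustrative assumptions.

BASE_ELEMENTS = {"test_name", "value", "status"}

def validate(instance, allowed_details=None):
    """Accept base elements always; accept extra keys only when the
    constraining archetype lists them in allowed_details."""
    for key in instance:
        if key in BASE_ELEMENTS:
            continue
        if allowed_details is not None and key in allowed_details:
            continue
        return False
    return True

generic = {"test_name": "Hematocrit", "value": 0.41, "status": "final"}
extended = dict(generic, instrument_serial="XYZ-1")  # a new element via 'Details'
```

So an unconstrained instance with extra elements fails validation, while the same instance passes against an archetype that constrains 'Details' to include the new element.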


Stephen: Where is test method?


Linda: Perhaps I omitted that.


Heather: In protocol. Test methodology.


Linda: Apologies. I missed... in protocol. OK. [Looks in NEHTA] Test procedure - need to add.


Stephen: Important - indicates if can compare results.


Linda: [Adds Test Procedure]


Linda: Could you give Use-case where multiple...?


Heather: I think is standard...


Stan: I think 0...1 in model we are looking at.


Linda: Interesting to hear where this is in the other models.


Stan: Test Method [0...1] - [under CoreStandardLabObs]


Linda: For InterMountain - at individual observation.

In FHIR - don't remember seeing it.

So do we have Use-case with different test methods?


Heather: We would use a different instance. So only 1 test method at a time.


Linda: Stan - would you group if different test method?


Stan: Don't think so.


Linda: [to Michael] Results4care groups together tests with different methods?


Michael: No. But it is the NFU lab results model. I know William in another group working on lab results, not submitted yet. No - just a constraint. Says it should not conflict with that.


Linda: If you can get feedback from William...


Michael: William does not know this model - did not do with William, but with NFU.


Stan: Actually, with creatinine clearance and 2 different specimens, you have the potential for 2 different methods. True for the world, but I have never seen it sent this way. But theoretically it could lead to 2 different methods.


Stephen: When have multiple specimens, have multiple test methods.


Linda: So 1 test method per specimen.


Michael: State again the Use case? Asking if can have multiple test methods? But Stan said panel with multiple...


Stephen: An example: Sepsis - septic screen panel of tests. Blood, sputum, urine, swab...


Heather: Is that not a grouping of ordering? Order blood, urine... Send to different diagnostic services? Some to micro...


Stan: Yes - us too. Maybe ordered as 1 thing, but get back 3 things. They would send separately with 3 different...


Stephen: Send at same time, yes, but results will come back separately.


Michael: If I look at Lab Result, have lab results, points to specimen, has multiple lab tests and each has its own test method... So is flexible - multiple tests on same specimen.


Linda: So question is, when group lab results into container, always use same method.


Michael: No. Multiple tests on 1 specimen and each test has method.


Linda: Do you know of Use-case with this?


Michael: Oh - that is question.


Linda: Yes - Stan suggested Creatinine-clearance, and Stephen, Septic panel. Did we come to conclusion that might be multiple methods?


Stan: At the panel level, even though you put these things into an automated analyzer - at the panel/collection level you could say these were all done on a Coulter(?) or Chem7 or SMAC or... What you are glossing over is that, inside that instrument, they have their own methodology. So - whether it is done by spinning to get the volume of packed red blood cells, or calculated... Hemoglobin... cell size... Knowing the procedure at that level is clinically important; the type of equipment is not. At the collection or panel level you could put all the methods, but... At the panel level, instrument type - but we don't see this. So I would not put method at the panel level, only at the result level.


Linda: At panel level, method [0..1], and test level...(?)

Now we have only 10 minutes.

So, Result - Result set identifier. The Result itself [Linda reads from each column]

Organizer component [HL7]. I assume Organizer components can be nested. Does anyone know for sure? Mark is not here.


Stan: I think so, but....


Linda: Looks like components could be another organizer. Are we de-scoping nested results, as per the Call for models? Does anyone have position on this?


Stan: I think scope of this is standard lab... We don't do further nesting.


Linda: Anyone... result group identifier? ...in-scope or out of scope?


Michael: In NFU model, when transform DCM into something you can implement...


Linda: Is there an assumed Lab-result id?


Michael: Yes - will always be ids.


Linda: I want to ask Mark. Result-set identifier on the organizer needs to use [0..*]. Or whether this is constrained to [0..1] for result organizers.


Heather: The NEHTA model has a TestResult Set Identifier in the protocol.


Linda: Thank you. Where? Why is it at test result details?


Heather: Because it is the identifier that comes into the Lab - the id is different.


Stan: Yes.


Linda: So filler id?


Heather: Yes - probably is in wrong spot.


Stan: Is this an identifier of an instance of a result? An instance-identifier? Done on patient?


Linda: Yes - identifies the set of results.


Stan: A deep question - whether you show the identifiers in the model or assume them from the underlying structure. We're showing result identifier in each of the models. It is implied that will convert to XML structure. In our cases, inherited high up in the classes. A deep question of if you do it here. We like to do it here because all will ask *Where is instance identifier* because lab sends result with instance identifier and then sends update. I know if see same instance identifier. Is a subtle question - what should be in the model.


Linda: We are trying to deal without using representation... the identifier is not in the CIMI reference model. I will send out the spreadsheet. If everyone can go through the models they know well... We will get together in 2 weeks.