CIMI MTF Minutes 20121011

Revision as of 12:36, 18 October 2012 by Mvdzel

CIMI Modeling Taskforce - Meeting Minutes
Thursday 11th October 2012 @ 20:00-22:00 UTC


Attending

Linda Bird

Gerard Freriks

Hendry Wijaya

Mike Lincoln

Sarah Ryan

Stephen Chu

Daniel Karlsson

Dave Carlson


Galen Mulrooney

Michael van der Zel

Rahil Qamar Siddiqui

Joey Coyle

Thomas Beale

Ian McNicoll

Larry McKnight

Jay Lyle

Heather Leslie

Eithne Keelaghan


Agenda

  • Weekly news & updates
  • HL7 Model Submissions (Mark)
  • EN13606 Association Model Submissions (Gerard)
  • Results4Care Model Submission (Michael)
  • Review of Observation Reference Model Pattern
  • UML Expression of the Patterns
  • Lab Report Modelling Patterns

Weekly News & Updates

  • The Netherlands Meeting - confirmed (Sunday 2nd to Tuesday 4th)
  • CIMI Terminology Subgroup - met on Tuesday
  • Models submitted so far:
    • MOHH (Singapore)
    • IMH
    • NEHTA/openEHR
    • EN13606 Association - Lab Test example
    • Results4Care - Lab Report
    • HL7 (CDA documents)
      • Mark is interested in a small number of patterns, with the patterns mapping to discrete models
    • NHS Models (Temperature)

Detailed Minutes

Linda: The meeting for December 4th has been confirmed.

Gerard: Meeting is in the Northern part of Netherlands.

Michael: Should probably take 3 hours to get there.

Linda: On Tuesday the Terminology Taskforce met and talked about Terminology Binding; this will continue next week. Models submitted - none new this week. Please submit as soon as you can. Is anyone a week or two away from submitting a model? The VA? Mike, Galen?

Mike Lincoln: I don't know if we will have anything.

Linda: Any news or update for the week?

Michael: I worked on UML. I did not have a chance to talk to Dave.

Phone (Dave?): Want to look at UML on Monday. Review of the Observation-type pattern to use for labs - we can't develop lab archetypes until this is done.

Linda: Yes - this is critical. I was hoping we could show the other three alternatives. Come back to the granularity we have discussed. Apgar score. Blood Pressure.

Phone: [We should] make simplifying assumptions and then go ahead. We will be into November and not [yet] able to start archetypes.

Linda: Yes.

Rahil: Were you able to add NHS models? Body Temperature?

Linda: No. Thanks for reminding me. Gerard - I have Pdf Document: EN13606.

Gerard: Our approach was different from others. On a very simple level - subset of CIMI model. To use, must transform to something usable. We have one complex generic pattern that everything is derived from. [Pdf shown on screen].


Gerard: We try to harmonize around the Systems of Concepts for Continuity of Care - a model that defines the processes of clinicians working with a patient. Look at the system of concepts. What is clear: there are procedures - simple, or a clinical pathway, or whatever; procedures are either ordered or not, and (?) can be assessed.

Gerard: Have an observation. Four possible specializations(?): observation entry, evaluation entry, action entry, instruction entry. You order a procedure, or execute [an action], or observe. Each creates its own generic pattern. Used in the diagnostic phase. Record whether used with management aspects or (?) or ... Part of the generic pattern for a lab test. Each lab test is one entry. Artifact. Named object - you can name an object. Give it a localization. For the value, we can record in terms of...

Linda: What shall I show on screen?

Gerard: Look at page 9.

[PDF excerpt - page 9]

Lab Test Models

All Lab-test models are derived from the SIAMS generic semantic pattern [1].


Each SIAMS artefact is constructed - in the case of those with the ENTRY class as the starting point - using CLUSTER models.

- NameValue: holds the Name/value pair

- SCNModifier: allows attaching a state model, an expression of certainty and (non-)negation

- ConfoundingFactors

- ToA: Type of Artefact

- Meta-Data


[1] MindMaps are used to model the SIAMS pattern. Based on these MindMaps, SIAMS artefacts or 13606 artefacts are made, depending on the Reference Model that is used. The way the patterns are modelled in the artefact editor might differ slightly from the MindMaps. Some simplifications are made; redundant classes are removed; what appear to be classes in the MindMap are modelled as attributes or other parts of the SIAMS pattern, etc.


Gerard: All lab-test models are derived from SIAMS. The reason we are using this artifact [is that] Participation is involved. Methods for observations are different from methods for instructions.

Gerard: Go to pg. 12.

[PDF excerpt - page 12]

The ResultValue allows the documentation of the result part of the Name-Value pair as:

- text

- codes

- numbers

- semantic ordinals

- images


When the SCN-Modifier indicates that there is a negation, the meaning is that the stated finding is absent.

E.g. for 'ENTRY:Observation:DiagnosticActivity', 'NamedObject=Blood Glucose', 'ResultValue=SemanticOrdinal ('moderately high', defined using inclusion and exclusion criteria)', the meaning is that the Blood Glucose level is not 'moderately high'.
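The negation semantics in the excerpt above can be sketched in a few lines of illustrative Python. All class and field names here are invented for illustration; they are not taken from the SIAMS models.

```python
from dataclasses import dataclass

# Hypothetical sketch of a Name/Value pair with an SCN-Modifier
# negation flag, as described in the PDF excerpt above.

@dataclass
class SemanticOrdinal:
    label: str  # e.g. 'moderately high', defined by inclusion/exclusion criteria

@dataclass
class ObservationEntry:
    named_object: str              # e.g. 'Blood Glucose'
    result_value: SemanticOrdinal
    negated: bool = False          # SCN-Modifier negation flag

def meaning(entry: ObservationEntry) -> str:
    """Render the clinical meaning, honouring the negation flag."""
    verb = "is not" if entry.negated else "is"
    return f"{entry.named_object} {verb} '{entry.result_value.label}'"

glucose = ObservationEntry("Blood Glucose", SemanticOrdinal("moderately high"), negated=True)
print(meaning(glucose))  # Blood Glucose is not 'moderately high'
```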


Gerard: See the Results value. It allows you to record free text. You can add units of measurement, sensitivity, position, normal range, signaling range (what is normal for this patient), and it allows specificity.

Linda: Is there a separate element for different data types?

Gerard: The numeric result allows you to enter... separate elements. So this is how we record a lab test. The named object has the name of the lab test. This whole system - lab test - you order it (a separate pattern), execute it (generate pattern), and observe the lab test, and perhaps you can assess the lab test - assess the status of execution of the procedure. So a Lab Test is 4 artifacts. This is what I wanted to present.

4 artifacts: 1) order; 2) execute; 3) observe (output); 4) pr(?) (assessment)

Stephen Chu: It does not seem that there are provisions for quantification - type of value. Don't you see how you cannot have...

Gerard: The numeric result allows you to record a number. It also allows you to record qualitative [values]. The numeric result allows you to...

Stephen: But I am talking about qualitative...

Gerard: You mean qualitative?

Stephen: Yes.

Gerard: Semantic ordinal.

Stephen: So is not qualitative?

Gerard: Allows you to express qualitative thing.

Stephen: But constrained to ordinal. For example, anti-microbial susceptibility test. Can't do in ordinal.

Gerard: Semantic ordinal is a pattern that allows you to specify a list... exclusion and inclusion criteria.

Stephen: Then you are overloading this variable.

Gerard: We see no other way to represent. This is an enumeration. Allows you to define a list. Also allows you to refer to value set.

Stephen: Scroll up to... Confounding factor in diagram. Lab Test Models - Siams Pattern. I would not consider method as a confounding factor... or participation... ...overloading confounding factor... confusing this pattern.

Gerard: We think it is important, when you have an artifact, to know the part of the patient [involved]. [For example] - when chemical, [that it is] related to the hormone system. It is important to capture as many semantics as possible in the structure.

Stephen: I am not saying not important. The reasons, participations, methods... they are not confounding factors. This is semantically inaccurate.

Gerard: I must think about name. You have me thinking about name.

Linda: Yes - confounding factor is used in a different way in other models.

Linda (cont): Any other questions? Thank you, Gerard. We'll move on to Michael.


Michael van der Zel: I didn't make the model I wanted to share. We are still working on it.


Linda: So Lab Results represent a group of tests.

Ian: What is difference between panel, cluster?

Michael: I don't know.

Ian: From CCD environment?

Michael: Yes


Michael (cont): The specimen container contains ID, Material, Methods. It is possible to miss a lot of variables in this. We focus on what clinicians need to exchange in the [process] of care. Not all the info is needed.

Linda: I noticed - status on lab results, but not on lab tests.

Michael: Again, I thought you would ask this. I will look it up.

Michael: Move on to lab test. All tests performed on specimen and the results. The abnormal flags. Confidence. Reference ranges.

Linda: What is "Bepalingsdatum"?

Michael: The determination date - analysis date. Data type of results - type is "any". Determined by type of test.

Jay Lyle: Not a scalar or physical quantity?

Michael: Depends on test.

Comment: I was hoping, Linda, you would make comparison in spreadsheet.

Linda: I have started. Hope to show next week.

Michael: If you need help, let me know.

Ian: Close to OpenEHR. I think very close correspondence with what Michael showed and OpenEHR.

Question: So next week - Temperature?

Linda: No - first lab results, then Temperature. Next, the Observation Reference Pattern. There are a couple of things after last week's meeting... Grahame Grieve made comments on the structure I presented.

Linda (cont): In a Lab report, each lab order can have multiple results, and [a result] can have multiple orders. So there is no point in grouping orders and results together. That is why he removed this section. So if you need to group requests with orders... you can do this.

Michael: Is a lab order always necessary when transmitting lab results?

Linda: No - that is why it is zero. Also, the reference range is wrong, because you don't need only a single reference range. In Singapore, we have this.

[to Galen] I know you use single-reference range, min/max.

Galen: Single... holds both min and max.

Dipak: Suggestions. An example would be the need to vary the ref range depending on gender, age, or due to [a different] expected normal - a patient with Thalassemia or not. We may need to label the ranges, such as by age and gender.

Galen: In the PHIN - only 1 reference range that is applicable to patient. But lab will have set of ranges. May be incorrect. May not know race of patient.

Linda: When conditions that ref-range depends on are not known.

Galen: So I would agree with Dipak. So I am changing(?) the PHIN to make it 0 to many.

Ian: In OpenEHR, have standard range and possibility to change... I looked at (?) - also critical ranges. Another example of Use Case.

Dipak: That is interesting, Ian. One of the difficulties is, what triggers an action may be complex. Starting to go from (?) to clinical guidelines. So when I interact with Informatics teams.. (?) want on their side of boundary.

Stephen Chu: Principle is... if patient-specific ref range, we use that and avoid stacking ref-range with multiple sets. But I agree with Ian about critical value. For that, to have ability to include more than one is useful.

Gerard: In 13606, we allow multiple normal value ranges. An item range - you can have any number.
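The point raised by Dipak and Galen - a result carrying zero-to-many reference ranges, each labelled with the conditions under which it applies - could be sketched as follows. This is hypothetical Python; the field names and the first-match selection rule are assumptions for illustration, not taken from PHIN, 13606, or any submitted model.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch: a result with 0..many labelled reference
# ranges, selected by the patient's known attributes.

@dataclass
class ReferenceRange:
    low: float
    high: float
    gender: Optional[str] = None   # None = applies to any gender
    min_age: Optional[int] = None
    max_age: Optional[int] = None

@dataclass
class TestResult:
    name: str
    value: float
    reference_ranges: list = field(default_factory=list)  # 0..many

def applicable_range(result: TestResult, gender: str, age: int) -> Optional[ReferenceRange]:
    """Pick the first range whose labelled conditions match the patient."""
    for r in result.reference_ranges:
        if r.gender is not None and r.gender != gender:
            continue
        if r.min_age is not None and age < r.min_age:
            continue
        if r.max_age is not None and age > r.max_age:
            continue
        return r
    return None  # conditions unknown, or no matching range

hb = TestResult("Haemoglobin", 11.8, [
    ReferenceRange(13.0, 17.0, gender="M"),
    ReferenceRange(11.5, 15.5, gender="F"),
])
r = applicable_range(hb, gender="F", age=34)
print(r.low, r.high)  # 11.5 15.5
```

Returning None when no range matches mirrors Galen's remark that the lab may not know, for instance, the race of the patient.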

Larry McKnight: Another consideration is a comment field of textual explanation: [such as] patient's platelets were clumped. Is that represented here?

Linda: Please introduce yourself.

Larry: I work in Siemens under Marc Overhage in Informatics. I am a physician. I was previously involved in HL7 patient-care.

Linda: One important issue brought up last week is difference in purpose of entry.


Linda: Define shared context, provide a modeling governance unit, define indivisible atomic statements and develop collections... OpenEHR - more governance.

Tom Beale: That is probably not quite a correct representation of what I said. The idea of governance unit - has to do with distinct notion of archetype. A level of models whose purpose is to group into natural groups, i.e. things that co-occur in medicine. The point of things that occur together and are governed together - a group. Not a matter of entry or composition. Governance unit - to define the library of bits and pieces. Actual data set - things being collected from screen. In OpenEHR, that is a template. I put up something in blog. 3 levels - 3rd is reference level. You've got to have a level where you define things once and for all. Want to define BP once. That is 2 levels above reference level.

Linda: Was discussion on (?)

Tom: 2 principles - if working on an archetype. Grouping - natural co-occurrence of things that would not occur apart. So, low-density lipoprotein (LDL) would not be reported apart from cholesterol. So things that co-occur, so we don't have too many models. The other reason is because they are captured together.

Linda: Can you send up some notes?

Tom: I put definition on my blog. An archetype: The grouping is not applying [whether] they are collected together or are a template. Grouping to do with data capture, Model of Use.

Linda: What I want to get... Looking at what an observation pattern is.

Linda (cont): I want to recap Draft Decisions made.


Phone (Dave?): When to use entry, cluster?

Ian: Absolutely. You can do governance and review on a cluster. But better if one rather than 4 or 5 clusters - a bigger task.

Phone: Difference between top-level grouping and clusters?

Ian: Creates more overhead. But we do use this. For example, ambient O2 - cluster we use might seem a clean way of working, but... But can't be hard and fast about this clinical modeling. Tolerate variance. It doesn't matter. Models themselves are truth, will tolerate (?). But be aware of over-patterning.

Linda: We are looking for (?)

Ian: I am pushing back on this. The more you try and control, the harder. We try to over-impose a pattern all the time, but it doesn't help. It annoys those who want a pattern.

Tom: Entry level, Composition level, etc. The 2 important levels are composition and entry. The *minimum indivisible unit of information*, at least in OpenEHR, minimal...

Linda: I think that is the problem in OpenEHR, it is not the minimum indivisible unit of information. They can be broken apart.

Tom: Let me clarify. He meant indivisibility in the clinical sense. Diastolic BP floating around on its own is not going to mean anything. You can break it up, but you can't make a semantic statement about the model. It is the groupings that make sense. Little pieces vs. clinically designed [groupings].

Linda: The collection itself (?)

Tom: The concept of an observation... clinical event - that is the idea of what entries are to be captured.

Stephen: One way to look at indivisibility is context of use. If we can put context around (?).

Larry McKnight: Can I put out an example? 'The site is the left arm.' [On its own] this is meaningless. A BP site? A wound site? So that should not be represented separately. But even for systolic, diastolic - it is fuzzy. Orthostatic BP - patient lying or sitting down - divisible or a cluster?

Tom: One BP is a potential entry. But you can have multiple; orthostatic BP can be represented this way, with 3 separate entries for the 3 observations in an orthostatic BP entry.

Heather Leslie: When I think of entry levels I think of standalone. A chunk of information that can be transferred. We say there are common patterns. e.g. O2 levels, dimensions, use across multiple... as cluster. So that notion of entries - clustering...

Linda: So, BP example, we looked at BP systolic, diastolic. Based on granularity, we agreed to group together. We may need to consider, when grouping, those measurements might have different ref ranges.



Linda: The other is Apgar. Single entry with 1,2,5,10 minutes, or single individual entries for panel collection.


Linda: The general sense is that people would prefer the time series in a separate entry, with 1 minute, 2 minutes, etc. as separate entries and then... the score as (?). We had discussed individual observations, grouping level 1 into level 2.



Gerard: To me, any name, value pair will be an entry since discrete point in time. You have multiple entries. One entry/cluster. Lab tests - one measurement is 1 name, value pair, is 1 entry in our way of thinking. BP, systolic and diastolic, are separate at different points in time. We consider each measurement = 1 entry.

Linda: All of context information needs to be entered for each entry.

Gerard: That is implementation issue. We can deal with that.

Tom: If you use the CIMI approach, you still only need one archetype for the time points and data points, but you will have (?). So you want a model that says what the data points are and what the time points are. So that is part of the model. You want to get them into the same archetype. If multiple... at run-time. Just because you have n entries in the data does not mean you have n archetypes. We need to be clear regarding instance data vs. the model. You can use a cookie cutter more than once.

Linda: Yes, implication not clear from diagram. Need separate entry for series as a whole to hold other information.

Stephen: I agree with Tom. So, 2, 3 levels not make sense. I agree with Tom, 1 archetype with time series, so at runtime, repeat collection.

Rahil: I think comment was to have ability to record different series. If Apgar recorded by different person for each... or contextual information. The reason for #3 option, 5 separate entries vs 5 separate (?) of entry. #1 does not allow that.

Ian: Difficult to imagine each single recording of Apgars. But if so, then create 3 entries.

Rahil: Why I highlight this is because what we decide for this will be a pattern we use for others. Need efficient pattern so we can address this issue.
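Tom's "cookie cutter" distinction between the model and its run-time instances can be sketched as follows (illustrative Python; the names are invented for this sketch, not CIMI definitions):

```python
from dataclasses import dataclass

# Hypothetical sketch: ONE model (the archetype) for a timed score,
# instantiated as many times as needed at run-time. n entries in the
# data does not mean n archetypes.

@dataclass
class TimedScore:           # the single model, defined once
    minutes_after_birth: int
    score: int

# Run-time data: several instances stamped from the one model.
apgar_series = [
    TimedScore(1, 7),
    TimedScore(5, 9),
    TimedScore(10, 10),
]
print(len(apgar_series))  # 3
```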

Linda: Laboratory Result: Hypothesis


Linda: The Lab Result Group is where the context is shared, recording who performed... and the Test Result Cluster is where individual observations are recorded. The Test Result Cluster inherits the context that it is in. So, once again, the Test Result is modeled as the Observation Pattern.

Stephen: Test Result Cluster might need to contain (?) if test done on difference specimens.



Linda: Whether specimen gets moved down or, alternatively, use a link.

Ian: I agree with Stephen. Difference between hematology types and other tissue types. Extremely well documented in HL7. I don't think we should try to... We have lots of examples.

Stephen: One good example. Results, Blood, need to link blood results to blood specimen and urine results to urine specimen.

Michael: In the model I showed, [it is] based on CCD. So the Lab Result contains one specimen and the Lab Test [contains] no specimen. And Lab Results are grouped in a Result container.

Ian: CCD is a subset of other models. I just repeat - this has been done hundreds of times. I took [mine] from HL7. We don't want to repeat this. I fail to see why [we would].


Linda: We need to include different models. Maybe wait till we have done the comparison between models. We want to look at how to get consistency between different test results. Mark would consider hemoglobin and platelet count to be independent. We have the question about grouping, based on the previous discussion regarding granularity. Who, what, when, where - same entry because of shared context. Whereas if separate... From here - brainstorming. One option: have an entry represent the collection - the panel or result group.

Gerard: Agreement of panel? Can be a panel or test. Order and execute. Get details of specimen. When it comes to reporting, executing, separate observations are reported and can come in any order.

Linda: Different lab groups test in different ways.

Gerard: A panel is an ad hoc representation.

Stephen: But not matter. What is important is what makes clinical sense.

Gerard: There are many [that make] clinical sense. [Will] never be a universal panel.

Stephen: I don't think I can agree.

Gerard: Has nothing to do with definition of value-pair. Let's call it a measurement. Has a name and a result. But panel - can't have universal agreement on this.

Linda: I think we must consider both approaches and consider both for CIMI. With individual tests, [each] could have a value or sub-item. We would need to repeat the subset because each would need its own reference range. Ignore timing [on slide]. We would have a sub-item structure. We would repeat the cluster - repeat the whole observation, but maybe not in agreement with granularity, or [another option].



Tom: Use-node vs. slot. [see slide] A use-node [re-uses something] that has been defined once. A slot = a place to (?) external archetypes. Two kinds... You build an archetype with a cluster. Another person... makes a new archetype, often a cluster. Another person points to an external reference... pointing to the new cluster archetype. Another situation is when you know a priori that at design time you don't know [the filler, but] can make a constraint statement, e.g. no orders, otherwise open. So the concept can be filled at design time, but commonly at run-time. So think of design-time (use-node) vs. run-time (slot).

Linda: Context could be inherited. But have entry pointing to entry.

Tom: If trying to point to (?), don't forget. Whatever is defined on a higher level structure pertains to all of the pieces. Don't need to put a pointer backwards. Will come out at run-time. The referential integrity of reference model - this is where it is normally true.
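The design-time (use-node) vs. run-time (slot) distinction Tom describes can be sketched as follows. This is hypothetical Python; the class names and the constraint style are illustrative only, not openEHR or CIMI definitions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a use-node re-uses a structure already defined
# in the same archetype; a slot is a constrained placeholder filled by
# an external archetype later, commonly at run-time.

@dataclass
class Cluster:
    name: str
    children: list = field(default_factory=list)

@dataclass
class Slot:
    excluded: list = field(default_factory=list)  # e.g. "no orders, otherwise open"
    filler: Cluster = None                        # resolved at run-time

    def fill(self, cluster: Cluster) -> None:
        if cluster.name in self.excluded:
            raise ValueError(f"{cluster.name} is excluded from this slot")
        self.filler = cluster

ref_range = Cluster("ReferenceRange")                        # defined once...
entry = Cluster("LabTest", children=[ref_range, ref_range])  # ...re-used twice (use-node)

slot = Slot(excluded=["Order"])          # constraint stated at design time
slot.fill(Cluster("SpecimenDetails"))    # filled at run-time
print(slot.filler.name)  # SpecimenDetails
```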

Linda: I can show... my map (Mindmap?). So sub-item would basically allow you to have a hierarchy where, at moment, [have] only a single reference range. So result sub-item would represent individual tests, whether hemoglobin or...

Rahil: Is result... do you need a sub-item since is 0 to many and could be replicated? Do you want holding class at runtime?

Linda: OK - so set of results. We could do that if not require additional level of hierarchy. So, the result itself would be Hemoglobin with ref-range and hematocrit with ref-range. Requires one level of nesting. Any comments?

Ian: I don't get why pushing back into nesting. Over-engineering.

Linda: Part is consistency between models.

Ian: Why does it matter?

Linda: Want query consistency.

Ian: You are assuming (?) will apply. Querying is required by models...

Linda: So we should let modelers model as they want?

Ian: Yes.

Linda: When query over multiple observations, how...?

Ian: Why looking for abstract values? We don't do those. We want sodium or...

Larry: I am not sure I agree. There is a tendency to over-structure. There is querying that goes on... e.g. Query for observations in last 3 days, might be hemoglobin or... Anything that is abnormal.

Ian: I agree with that. There are times we do need to query consistently, but other times, don't query on abstract level.

Galen: Where there is consistency - good. When give to programmers without clinical knowledge, don't want to have them re-create code, but also want them to understand, so some consistency is helpful as implementation...

Linda: Apologize for no lab reports.

Rahil: I partially agree with Ian regarding over-planning. But if have underlying ref-model and patterns, inclusive enough for each possible use-pattern. But, second, what trying to use with constructive binding and iso-semantics, will help modelers to do the best. Decide what want to do... constructive binding... The right balance that we need.

Stephen: I feel we are running around in circles. We probably only need to look at each of the patterns according to who, what, when... rather than define a rigid pattern. And let clinical modelers go ahead with refinement of the model. I think we can achieve [this].

Linda: We should lighten up with what is in pattern and we should look more at different models. So, have more emphasis on terminology-binding.

Stephen: I don't think [we should] put more emphasis on term-binding. We need to determine (?) that does not change. I would like to see [us] not go down the path of a rigid approach.

Mike Lincoln: I agree with what was said although I see your point.

Tom: Querying and consistency. You don't need consistency across models. [This is] what Ian said. So one group is dermatology, another group is oncology. Different styles of modeling, but it does not matter, because for the data point you are after... it does not matter if it is a deep or shallow path. It only matters that there is a path. You need consistency with context and process-related data points. So the main thing with querying is not... (lumping?).

Heather: Yes - you can draw as many patterns as you like, but we break them. We just did. We thought we had the Mindmaps well aligned, but when we designed the models we had to design them differently. You will always find a use case where it doesn't work.

Linda: I thought it would be useful to identify the ref-range.

Heather: As long as true in model. This is where you need to go. That is why I like different entries. The clinical stuff - how you arrange - does not matter. Every time we have good pattern, we break it.

Gerard: To me, onscreen: we order a procedure, we execute a procedure. Four different patterns. The same pattern for order, execution, assessment. All share the same aspects. So when you start to model from there... very generic patterns.