[Vp-integration-subgroup] White paper revision

John Gennari gennari at uw.edu
Mon May 17 22:10:36 PDT 2021


All: About 9 of us had a lively Zoom meeting today to chat about the 
manuscript. By the end, it was a productive meeting, and I'm hoping that 
this email will capture some key outputs from the meeting. I apologize 
if I said some things that were a bit "inflammatory". Obviously 2 years 
would be much too long to get this paper out the door.

I saw two outcomes. First, we had some nice ideas and discussion about 
re-ordering (initiated by Tomas Helikar). Below, I propose one possible 
ordering, but this is certainly a work-in-progress. 
The reason that I think ordering is important is that it will give us a 
much better ability to write a strong concluding section, where we talk 
about themes and the larger arc of our ideas.

Second, we agreed that we should nominate "point persons" who would be 
in charge of at least the initial cut of each of the subsections. As 
Jacob pointed out, this information should be easy to get from older 
email and the history of the paper's development. During the Zoom 
meeting, we associated some co-authors with some sections, but our 
coverage wasn't perfect (see challenge #12). Hopefully people will 
"stand up" and admit that some section of text is theirs.

So below, I include the original title of the section, a few 
words about the content of that section, and then a name (or several 
names) of co-authors who will be the "point person" to make sure that 
the appropriate content is included. Obviously, all co-authors can and 
should chime in on any part of the text, but the point person should 
make sure that the key ideas are included.

The basic ordering idea for the dozen challenges was to follow the 
life-cycle of model development, execution, sharing and integration, and 
eventually implementation. So...

*********************************************

*(1) "**Data**and measurement definitions*". Before you can build a 
model, you must have data. So data availability and measurement 
standards is the place to start.

People: Hana D, Jacob B

*(2) "**The variety of modeling languages*" This is about the choice of 
modeling languages, such as using SBML, CellML, or Matlab. As I said on 
the phone call, this is sort of about "syntax"--how do you write down 
your model?

People: John G, Jon K, Rahuman S.
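
To make the "syntax" point concrete, here's a tiny sketch of my own 
(not from the draft): the same notion of "a model" expressed 
declaratively as an SBML document versus imperatively as code. It 
assumes the python-libsbml package is installed.

    # Declarative "syntax": the model is a document that a library parses.
    # Assumes python-libsbml is installed (pip install python-libsbml).
    import libsbml

    SBML = """<?xml version="1.0" encoding="UTF-8"?>
    <sbml xmlns="http://www.sbml.org/sbml/level3/version1/core"
          level="3" version="1">
      <model id="decay"/>
    </sbml>"""

    doc = libsbml.readSBMLFromString(SBML)
    print(doc.getNumErrors(), doc.getModel().getId())

    # Imperative "syntax": the model is code, here a first-order decay ODE.
    def dxdt(x, k=0.1):
        return -k * x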

*(3) "**The variety of modeling paradigms and scales"***Separately from 
modeling syntax, we must acknowledge modeling paradigms with very 
different semantics. Some clear examples are PDEs versus ODEs versus 
rule-based systems (and obviously one can combine these). Certainly 
semantics might impact syntax (the prior challenge), in that certain 
modeling language might be appropriate only for some paradigms.

People: James G, Eric F (?)
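
To illustrate what "different semantics" means here, consider the same 
degradation process written under two paradigms (my pairing, purely 
for illustration); in LaTeX:

    % Well-mixed ODE semantics: concentration c depends only on time.
    \frac{dc}{dt} = -k\,c
    % Spatial PDE semantics: the same degradation, plus diffusion in space.
    \frac{\partial c}{\partial t} = D\,\nabla^2 c - k\,c

Same rate constant k, very different mathematical objects -- which is 
exactly why a language built for one paradigm may not express the other.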

*(4) "**Units standardization*" A common reason that models are not 
reproducible are errors in units, or misunderstanding about units, or 
simply a lack of information about units.

People: Jacob B, Hana D
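
As a toy example (my own sketch, not from the draft): a rate constant 
published "per minute" but read as "per second" shifts every timescale 
in the model by a factor of 60.

    # Toy illustration of a silent units error.
    import math

    k_published = 0.6  # degradation rate, published as 0.6 per MINUTE

    # A re-implementation that assumes 1/second gets every timescale
    # wrong by a factor of 60:
    half_life_misread = math.log(2) / k_published           # ~1.16 "seconds"
    half_life_correct = math.log(2) / (k_published / 60.0)  # ~69.3 seconds
    print(half_life_misread, half_life_correct)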

*(5) "**A lack of annotations in models*". Once researchers publish 
models, they must annotate the model so that others can understand it. 
Quality annotation is essential for both search and reproducibility.

People: John G.

*(6) "**Models are hard to locate"* If your goal is to reproduce, 
understand and possibly reuse or integrate some other model, one must 
first find that model. This requires annotation (prior section) and 
repositories (Physiome Model Repository, BioModels) and search platforms 
(ModeleXchange).

People: Jon K, John G.
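
For flavor, a programmatic search might look roughly like the sketch 
below. I'm writing the BioModels endpoint and parameters from memory, 
so treat them as assumptions to check against the current API docs.

    # Rough sketch of a model search against the BioModels REST API.
    # The endpoint and parameter names are from memory and may have
    # changed; verify against https://www.ebi.ac.uk/biomodels/.
    import requests

    resp = requests.get(
        "https://www.ebi.ac.uk/biomodels/search",
        params={"query": "glycolysis", "format": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    for hit in resp.json().get("models", []):
        print(hit.get("id"), "-", hit.get("name"))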

*(7) "**Common platforms to execute models" *A model is pretty worthless 
as a static object. For folk to understand and reproduce models they 
must be executable. Alas, there is no single or consistent way of 
executing a model -- and of course, this interacts direction with 
section #2 and #3, above: Execution platforms are usually only for one 
modeling paradigm, and often for one modeling language. The 
BioSimulators work goes here.

People: Jon K.
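
One way to picture what a "common platform" buys us is a uniform 
programmatic interface that hides the engine-specific details. The 
sketch below is hypothetical -- the class and method names are mine, 
not the actual BioSimulators API.

    # Hypothetical uniform execution interface (names are mine, not
    # BioSimulators'). Each engine-specific backend implements run().
    from abc import ABC, abstractmethod

    class Simulator(ABC):
        """Uniform front end over paradigm- and language-specific engines."""

        @abstractmethod
        def run(self, model_path: str, start: float, end: float,
                steps: int) -> dict:
            """Simulate and return a mapping of variable -> trajectory."""

    class SBMLODESimulator(Simulator):
        def run(self, model_path, start, end, steps):
            # A real backend would hand the SBML file to an ODE engine here.
            raise NotImplementedError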

*(8) "**Credibility **and validity of models*" Once a model is 
published, how do folk know it is right? Model validation is a big topic 
and challenge. Credibility follows (in part) from validation, but also 
requires transparency and reproducibility, etc.

People: John Rice, Jon K, Jacob B
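
"Validation" covers a lot of ground, but one small, concrete ingredient 
is a quantitative comparison of simulation output against measured 
data. The numbers below are made up, purely for illustration.

    # Minimal sketch of one validation ingredient: root-mean-square
    # error between simulated output and (made-up) measurements.
    import math

    simulated = [1.00, 0.61, 0.37, 0.22]   # model output at t = 0..3
    measured  = [0.98, 0.63, 0.40, 0.20]   # experimental observations

    rmse = math.sqrt(sum((s - m) ** 2 for s, m in zip(simulated, measured))
                     / len(measured))
    print(f"RMSE = {rmse:.3f}")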

*(9) "**Environments to adapt and integrate models*" As I see it, one of 
the end-targets for this manuscript is to better enable model 
integration, to build better models. There are many challenges with the 
task of integrating two (or more) models. (One that has recently been 
discussed is that even if model A and model B are valid and correct, 
there is no guarantee that the combined model A+B is correct. I liked 
what William Waites and Katherine Morse posted on this subject.) This 
section is where SBML-comp and SemGen environments can be mentioned.

People: John G.
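
A toy version of that A+B pitfall (my own illustration): two models 
that are each correct in isolation can double-count a flux once they 
share a species.

    # Toy illustration: models A and B each include basal consumption
    # of the same shared species, so naive composition counts it twice.
    def dglc_model_a(glc):   # valid alone: production 0.5, basal loss 0.1*glc
        return 0.5 - 0.1 * glc

    def dglc_model_b(glc):   # valid alone: production 0.2, basal loss 0.1*glc
        return 0.2 - 0.1 * glc

    def dglc_naive_ab(glc):  # composed: the 0.1*glc loss is now doubled
        return dglc_model_a(glc) + dglc_model_b(glc)

    # Steady states: A alone = 5.0, B alone = 2.0, naive A+B = 3.5;
    # counting the shared basal loss once would instead give 7.0.
    print(0.5 / 0.1, 0.2 / 0.1, 0.7 / 0.2, 0.7 / 0.1)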

(*10) "Challenges for stochastic models" *Special challenges specific to 
stochaistic modeling. An obvious point to mention is repeatability -- 
stochastic models don't necessarily give the same results with the same 
inputs.

People: James G., Eric F
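
Concretely, the repeatability issue is as simple as the sketch below: 
without a recorded seed, honest re-runs differ; record the seed (and 
the RNG) and bitwise repeatability comes back.

    # Repeatability sketch: same stochastic model, same inputs,
    # different answers -- unless the RNG seed is part of the record.
    import random

    def noisy_outcome(seed=None):
        rng = random.Random(seed)
        return sum(rng.random() for _ in range(100))

    print(noisy_outcome(), noisy_outcome())      # unseeded: results differ
    print(noisy_outcome(42), noisy_outcome(42))  # seeded: identical results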

*(11) "Licensing barriers" *Issues around "open source" and CC0 licensing.
**

People: Jacob B

*(12) "Barriers to model implementations and applications"* (I might 
suggest this be re-phrased for better clarity). What this section should 
discuss are challenges is getting a community to actually use models for 
"real-world" applications or decision making. This is more of a 
cultural/societal challenge, and thus seems like a nice big-picture way 
to end.

People: ?? I don't have any names here...

*********************************************

We didn't really talk much about it in the Zoom meeting, but there have 
been ideas tossed around about a "baker's dozen", i.e., adding a 13th 
challenge. We could also potentially merge some of the above.

The "point persons" listed above is obviously a subset of co-authors. 
That's fine and appropriate. Just for transparency, I follow what I 
think is pretty standard policy for authorship issues, and nicely 
summarized by theInternational Committee of Medical Journal Editors 
(ICMJE); see 2019 updated document at 
http://www.icmje.org/icmje-recommendations.pdf 
<http://www.icmje.org/icmje-recommendations.pdf> (Or see, below my 
signature, a summary of the key points of this document).

Finally, I've made the document editable by all at 
https://docs.google.com/document/d/1VvyP3YZQdQYjj8DFKOpQ4pn_0pdDGgiT/edit?ts=60a294c2 


-John G.
==========================================================================
Associate Professor & Graduate Program Director <gennari at uw.edu>
Dep't of Biomedical Informatics and      telephone: 206-616-6641
     Medical Education, box 358047
University of Washington
Seattle, WA  98109-4714 http://faculty.washington.edu/gennari/
==========================================================================

The ICMJE recommends that authorship be based on the following 4 criteria:

1. Substantial contributions to the conception or design of the work; 
or the acquisition, analysis, or interpretation of data for the work; 
AND

2. Drafting the work or revising it critically for important 
intellectual content; AND

3. Final approval of the version to be published; AND

4. Agreement to be accountable for all aspects of the work in ensuring 
that questions related to the accuracy or integrity of any part of the 
work are appropriately investigated and resolved.


