[Vp-integration-subgroup] [EXT] Re: [Vp-reproduce-subgroup] Credibility “for EACH USE” Re: White paper revision

Morse, Katherine L. Katherine.Morse at jhuapl.edu
Mon May 17 06:36:39 PDT 2021


I might have to do some digging to find it, but Dr. Mikel Petty* (UAH) published a proof that you cannot make any assertions about the validity of the composition of two valid models.

*The finest theoretical computer scientist I have ever known

KLM
---
Katherine L. Morse, PhD
IEEE Fellow
Principal Professional Staff, JHU/APL
11100 Johns Hopkins Road
Laurel, MD  20723-6099
(240)917-9602 (w)
(858)775-8651 (m) 
 

On 5/17/21, 5:22 AM, "Vp-reproduce-subgroup on behalf of William Waites" <vp-reproduce-subgroup-bounces at lists.simtk.org on behalf of wwaites at ieee.org> wrote:

    Adding to this (sorry if I’m starting to sound like a broken record).

    If you separately accredit M1 and M2 for your purpose, and they fit together like Lego, it does not automatically mean that M1 + M2 is credible. One of the things we would like to know is what properties M1 and M2 must have for us to safely believe that we understand what M1 + M2 does. We only know this for a few special cases today.
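
    To make the failure mode concrete, here is a minimal sketch in Python (the models and numbers are invented for illustration, not drawn from any real study): each model is valid on its own accredited input range, yet composing them drives M2 outside the range on which it was ever validated.

        def m1_infection_pressure(daily_cases: float) -> float:
            """M1: accredited for daily_cases in [0, 1000]."""
            return 0.1 * daily_cases          # outputs span [0, 100]

        def m2_hospital_demand(pressure: float) -> float:
            """M2: calibrated and validated only for pressure in [0, 50]."""
            return 2.0 * pressure ** 1.5      # extrapolates badly above 50

        # Each model is fine on its own accredited domain, but the composition
        # M2(M1(x)) feeds M2 pressures up to 100 -- twice its validated range --
        # so M1 + M2 is unaccredited for half of M1's legal inputs.
        for cases in (100, 400, 900):
            pressure = m1_infection_pressure(cases)
            demand = m2_hospital_demand(pressure)
            print(f"cases={cases:4d}  pressure={pressure:6.1f}  "
                  f"demand={demand:8.1f}  within M2's range: {pressure <= 50}")

    The toy point is only that "valid + valid" implies nothing about the composition unless the interface contracts (here, the input ranges) are part of what gets accredited.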

    > On 17 May 2021, at 12:44, John Rice <john.rice at noboxes.org> wrote:
    > 
    > 
    > Monday morning thoughts:
    > 
    > Nobody, modeler, peer reviewer, editor, software club, government agency, Government, NOBODY but ME can accredit ANY model for what I’M going to DO with it or its output.  Current language implies that, for example, NASA or FDA, because they have guidance and are working on how to do it for themselves (EACH of their uses), can accredit a model for MY use/purpose.  An increased risk is presented if anyone believes that because the FDA accredited a model for a piece of a clinical trial for a device for a purpose, it can be used again without REACCREDITING for the new use.  Of course, if both the model and the first accreditation process and decisions are VERY WELL documented, the reuse accreditation MAY be pretty easy.  Repeated reaccreditations, whether the decisions are to use or NOT use it, build a sort of pedigree (?), which may make some models much easier to reaccredit for many uses.
    > 
    > Therefore, in the paper, consider a global search and replace: “accreditation for each use” in place of “accreditation”. (Then change back the few places where that does not fit the sentence.)
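    > 
    > A minimal sketch in Python of that mechanical step (the file name whitepaper.txt is hypothetical; the few places where the phrase does not fit still have to be reverted by hand):
    > 
    >     import re
    > 
    >     # Replace the bare word "accreditation" throughout the draft;
    >     # \b keeps words like "reaccreditation" untouched.
    >     with open("whitepaper.txt") as f:
    >         text = f.read()
    >     text = re.sub(r"\baccreditation\b", "accreditation for each use", text)
    >     with open("whitepaper.txt", "w") as f:
    >         f.write(text)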
    > 
    > What has to change is that the burden for accreditation for use (AfU) is 99% on the USER.  The burden on the modeler is to DOCUMENT as you work (standard practice for any reliable software development), to write, and, when asked questions, to speak the truth, the whole truth, and nothing but the truth about the MODEL 1st (validation), then the math (verification), then the code (repeatability); after which it is on the potential user to ACCREDIT it for their USE.
    > 
    > Stop making it sound like models somehow make things easy.
    > 
    > User accountability may be a paradigm shift despite a few people having tried to make that point for 40 years. 
    > 
    > That said, there may be some First Principles in some sciences that, as a practical matter, do not require reinvestigation.
    > 
    > Companies know.  That’s why they don’t collaborate well together. Their models are understood well enough that they can reaccredit them quickly, so those models are proprietary, high-value, easily reusable, cost-saving competitive treasures.
    > 
    > Libraries of “accredited models” are another dangerous idea, as most people seem to think that they would work.
    > 
    > On May 17, 2021, at 06:33, Jacob Barhak <jacob.barhak at gmail.com> wrote:
    > 
    > 
    > Well William, 
    > 
    > You and Jonathan criticize some of the work that has been done. However, from a larger perspective, let us remember that although disease models have existed for about a century, this is still an emerging technology.
    > 
    > I have been working in the field for over a decade; I am a strong critic of our current state and believe we can do better.
    > 
    > Our technologies are still not good enough for prediction. We really cannot predict; moreover, we still cannot fully explain the phenomena we see computationally. So every technique will have difficulties in forecasting.
    > 
    > And when I write “we,” I include myself. I really wish we could do better.
    > 
    > This does not mean people should stop trying. So instead of trying to dismiss other methods, perhaps we should suggest how to learn from mistakes and improve things for the future.
    > 
    > We have a collective responsibility to prepare tools for the future. And I think a positive tone in our message about what to do to improve things will do better than pointing fingers in this situation.
    > 
    > I think we have done this reasonably well in the paper so far: for each deficiency, we were able to show a potential solution. Perhaps we should keep that approach.
    > 
    > The credibility section in the paper attempts to address the issue you discuss in a subtle way. It hints that we should do better as modelers so that regulators will trust us. I think it serves the purpose you are both aiming for. Yet if you think a stronger message is needed, the paper is now open for revisions and discussion, and you are welcome to make those changes.
    > 
    >           Jacob
    > 
    > On Mon, May 17, 2021, 03:13 William Waites <wwaites at ieee.org> wrote:
    > > Also, to motivate the focus on credibility, it could be helpful to cite an instance where the pandemic models were wrong or where a lack of credibility (trust) inhibited the use of a model. For the former, one example that comes to mind is the widely cited IHME model, which was substantially off in the spring.
    > 
    > Another good example is Friston’s dynamic causal model, which is interesting in itself and apparently a useful technique in neuroscience, but does badly for infectious disease, famously leading to the assertion that the model results’ divergence from reality must be due to mysterious “epidemiological dark matter”. The debunking of this sucked up a lot of time from people who know better…
    > 
    > 

    _______________________________________________
    Vp-reproduce-subgroup mailing list
    Vp-reproduce-subgroup at lists.simtk.org
    https://lists.simtk.org/mailman/listinfo/vp-reproduce-subgroup


