<div dir="auto">Sure, William,<div dir="auto"><br></div><div dir="auto">Any answer is better than no answer; so far, you and Gilberto are the only two people interested in the topic. I suggest you make the edits in the paper and the response. Your text is better than what I would have written anyway if no one had volunteered.</div><div dir="auto"><br></div><div dir="auto">And if Gilberto has ideas, I suggest he respond to this message, and we can include a link to the discussion in the manuscript and the response.</div><div dir="auto"><br></div><div dir="auto">Thanks for picking this up.<br></div><div dir="auto"><br></div><div dir="auto"> Jacob</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Jan 4, 2022, 05:15 William Waites <<a href="mailto:wwaites@ieee.org">wwaites@ieee.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Reviewer says: “This section seems to outline limitations of<br>
available data, but again makes no recommendations or proposed<br>
solution to any of the issues raised. Is this the intention?<br>
Most of the issues raised here reflect limitations of experimental<br>
science or data privacy, which likely cannot be meaningfully<br>
addressed by the modeling community.”<br>
<br>
The relevant paragraph is (I think): "Data availability to<br>
rationalize calibration and validation of models is crucial<br>
but often not possible because of data sharing policy and privacy<br>
(especially for individual human data). Moreover, undisclosed<br>
data from industry sponsored clinical trials used in model<br>
building and validation generally excludes many useful models<br>
from any assessment by the scientific community."<br>
<br>
There are some partial answers to this for personally identifiable<br>
data. One is to develop ways to generate synthetic data that is<br>
similar enough to the original that it works for calibrating models,<br>
but contains no information about any real individual. This is not<br>
easy to do and is an active area of research (especially for<br>
network-shaped data). We can simply point out that more research<br>
into developing these methods is needed. For validation, where you<br>
want to query a database based on model output to check that the<br>
output is consistent with what is in the database, the differential<br>
privacy literature might help. It gives a way to put bounds on the<br>
information that leaks from the database when answering queries,<br>
and those bounds can be tuned to whatever level is considered<br>
acceptable. Again, more research is needed to adapt this idea to<br>
model-building needs.<br>
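To make the differential-privacy point concrete, here is a toy sketch of the classic Laplace mechanism for answering a counting query with bounded information leakage. The function names and the counting-query setting are my own illustrative choices, not anything from the manuscript:

```python
import math
import random

def laplace_noise(scale):
    # Draw one Laplace(0, scale) sample by inverse-CDF sampling
    # (standard library only, no numpy needed).
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, epsilon, sensitivity=1.0):
    # Laplace mechanism: a counting query has sensitivity 1 (adding or
    # removing one individual changes the count by at most 1), so noise
    # with scale sensitivity/epsilon gives epsilon-differential privacy.
    # Smaller epsilon means stronger privacy and noisier answers.
    return true_count + laplace_noise(sensitivity / epsilon)

# A validation query such as "how many patients in the database match
# the model's predicted responder profile?" would be answered through
# this mechanism instead of exactly.
random.seed(0)
noisy_answer = private_count(100, epsilon=1.0)
```

The epsilon parameter is exactly the "bounds can be tuned to whatever is considered acceptable" knob; the open research question is how much of this noise model validation can tolerate.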
<br>
Would extending that paragraph along those lines work?<br>
<br>
Cheers,<br>
-w<br>
<br>
_______________________________________________<br>
Vp-integration-subgroup mailing list<br>
<a href="mailto:Vp-integration-subgroup@lists.simtk.org" target="_blank" rel="noreferrer">Vp-integration-subgroup@lists.simtk.org</a><br>
<a href="https://lists.simtk.org/mailman/listinfo/vp-integration-subgroup" rel="noreferrer noreferrer" target="_blank">https://lists.simtk.org/mailman/listinfo/vp-integration-subgroup</a><br>
</blockquote></div>