Friday, 26th May 2017.  There has been quite a lot of courtroom discussion of PCAST along the lines of applying Finding 3.  PCAST restricted the data they used to assess the foundational validity of probabilistic genotyping to papers published in the peer-reviewed literature; they disqualified the use of casework samples and discounted papers published by the developers.  This greatly, and I suggest unnecessarily, restricted the data available to them.  In the interim the FBI internal validation paper has appeared in FSI: Genetics:  Moretti TR, Just RS, Kehl SC, Willis LE, Buckleton JS, Bright J-A, et al. Internal validation of STRmix™ for the interpretation of single source and mixed DNA profiles. Forensic Science International: Genetics.

I have written to Eric Lander on 23rd April 2017 and 8th May 2017, written to PCAST and Eric Lander on 22nd May 2017, and phoned Lander on 26th May 2017, asking them to take cognisance of this additional data.  At the time of writing I have not received any reply.

Monday, 16 January 2017.

We have been working with John Butler to develop and transfer information so that we are better positioned to advise such committees if the opportunity arises in the future. While this endeavour did not get quite as far as we would have liked, we are certainly open to its continuation.

At the meeting of the National Commission on Forensic Science on January 9-10, 2017, Michael Peat, editor of the Journal of Forensic Sciences, indicated that he would not publish internal validations.  Shortly after, Eric Lander presented to the same group and insisted on empirical proof published in the peer-reviewed literature.

The initial PCAST report states:  “The scientific criteria for foundational validity require that there be more than one such study, to demonstrate reproducibility, and that studies should ideally be published in the peer-reviewed scientific literature.”  (Bold emphasis added.)

In the addendum, PCAST states:  “When considering the admissibility of testimony about complex mixtures (or complex samples), judges should ascertain whether the published validation studies adequately address the nature of the sample being analyzed (e.g., DNA quantity and quality, number of contributors, and mixture proportion for the person of interest).”

Neither of these quotes actually states a mandatory criterion of “published in the peer-reviewed literature,” although PCAST has stated this criterion in meetings.

The requirement for empirical proof of the validity of PG methods is fair enough. This leaves at least two open issues:

  1. The suggestion that this should be in the peer-reviewed literature;[1] and
  2. The extent to which the exact circumstances of the case need to be tested.

This would be a small problem, except for the influence this report commands.

To state the obvious, we cannot actually test every possible circumstance.  PCAST, in its addendum, lists DNA quantity and quality, number of contributors, and mixture proportion for the person of interest[2].  Replication, known contributors, multiplex, cycle number, CE machine, and injection strategy, however, are omitted.  Some of these factors are continuous variables: quantity, quality, and mixture proportion, for example, can take any value in a range, and hence we can never replicate the exact circumstances of any case.

There will be a need for some level of interpolation.  With PCAST, we did not discuss the details of what level of interpolation was acceptable.  This was partially because one meeting was dominated by a disruptive influence[3], but in truth this matter was never really on the agenda.
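One way to make the interpolation question concrete: a laboratory can at least check whether the conditions of a case fall inside the envelope spanned by its validation runs.  Below is a minimal sketch in Python; every range and parameter name is an invented placeholder, not a recommendation.

```python
# Hypothetical validation envelope; all numbers are invented placeholders.
VALIDATED = {
    "template_ng":    (0.0156, 1.0),  # input DNA per contributor (ng)
    "contributors":   (1, 5),         # number of contributors
    "poi_proportion": (0.05, 1.0),    # mixture proportion of the POI
}

def outside_validated_scope(case):
    """Return the case parameters that fall outside the envelope."""
    return [name for name, (lo, hi) in VALIDATED.items()
            if not lo <= case[name] <= hi]

case = {"template_ng": 0.03, "contributors": 4, "poi_proportion": 0.02}
print(outside_validated_scope(case))  # -> ['poi_proportion']
```

Even inside the envelope one is interpolating between tested points; outside it one is extrapolating, which is far harder to defend.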

To come back to peer-reviewed literature: journals do not publish internal validations.  Developers can publish their first validation, but this is likely to be early in development and may not cover the full scope of later applications.  Given that, is it possible to achieve PCAST’s basic request for empirical tests, presented with open access and available for review, in a practical way?  Quite simply, peer review is not a total guarantee of excellence.  Currently, we are assembling a compilation of internal validations and will submit it for publication.  There is no guarantee that it will be published.

[1] PCAST have dropped their insistence that the developers not be involved.

[2] This is a change from the requirement in the original report that the minor contributor (not the POI) be at “least 20% of the intact DNA…”.

[3] Eric Lander handled this with great patience and tact.

13th January 2017.  After some very constructive interaction, I am disappointed to see PCAST state:  “A recent controversy has highlighted issues with PG. In a prominent murder case in upstate New York, a judge ruled in late August (a few days before the approval of PCAST’s report) that testimony based on PG was inadmissible owing to insufficient validity testing.  Two PG software packages (STRMix and TrueAllele), from two competing firms, reached opposite conclusions about whether a DNA sample in the case contained a tiny contribution (~1%) from the defendant. Disagreements between the firms have grown following the conclusion of the case.”

To the best of my knowledge, STRmix got an inclusion and TrueAllele an inconclusive, which I would not describe as “opposite.”  Any controversy has been one-sided, and I certainly do not need the pot stirred any more than it is.  Perhaps this is an instance where peer-reviewed or primary sources should be used rather than newspapers, especially for such important documents.  I have contacted Eric Lander and John Butler and hope for a reply.

22nd December 2016.  Phone conversation with Eric Lander.  Invited were Buckleton and Butler.  NIST stacked the meeting, adding Richard Cavanagh and Michael Coble.  Discussion centred largely on Hd true testing.
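For readers unfamiliar with the term: an Hd true (false donor) test asks what LRs a method assigns to people who are truly not contributors.  The sketch below illustrates the logic on a toy qualitative two-person mixture model (invented allele frequencies, no dropout, no peak heights; this is not STRmix’s model).

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)
N_LOCI, N_ALLELES = 10, 6

# Toy population: allele frequencies at 10 independent loci.
freqs = rng.dirichlet(np.ones(N_ALLELES) * 2, size=N_LOCI)

# All unordered genotypes and their Hardy-Weinberg probabilities per locus.
genos = list(itertools.combinations_with_replacement(range(N_ALLELES), 2))

def hw(g, f):
    a, b = g
    return f[a] ** 2 if a == b else 2 * f[a] * f[b]

gprobs = [np.array([hw(g, freqs[l]) for g in genos]) for l in range(N_LOCI)]

def sample_person():
    return tuple(genos[rng.choice(len(genos), p=gprobs[l])] for l in range(N_LOCI))

# Evidence: the alleles of two true donors at each locus (qualitative only).
donor1, donor2 = sample_person(), sample_person()
evidence = [set(donor1[l]) | set(donor2[l]) for l in range(N_LOCI)]

# Per-locus LR for every candidate genotype:
#   Hp: candidate plus one unknown   vs   Hd: two unknowns.
lr_table = []
for l in range(N_LOCI):
    pairs = list(zip(genos, gprobs[l]))
    den = sum(p1 * p2 for u1, p1 in pairs for u2, p2 in pairs
              if set(u1) | set(u2) == evidence[l])
    lr_table.append({g: sum(pu for u, pu in pairs
                            if set(g) | set(u) == evidence[l]) / den
                     for g in genos})

def profile_lr(person):
    lr = 1.0
    for l in range(N_LOCI):
        lr *= lr_table[l][person[l]]
    return lr

print("true donor LR:", profile_lr(donor1))

# Hd true test: LRs for 10,000 random non-donors; nearly all should be
# well below 1, and P(LR >= t | Hd) can never exceed 1/t.
false_lrs = np.array([profile_lr(sample_person()) for _ in range(10_000)])
print("false donors with LR > 1:", int((false_lrs > 1).sum()), "of 10,000")
print("largest false-donor LR:", false_lrs.max())
```

The point of such a test is the one made repeatedly in the letter below: counts of true donor runs inform false exclusion rates, but only large numbers of false donors inform false inclusion rates.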

14th December 2016.  Buckleton, Bright and Taylor submitted the following letter to PCAST (see below).

The President’s Council of Advisors on Science and Technology                  Wednesday, 14 December 2016

pcast@ostp.eop.gov

Dear PCAST,

You have specifically asked us to identify any relevant scientific reports that (i) have been published in the scientific literature, (ii) were not mentioned in the PCAST report, and (iii) describe appropriately designed research studies that provide empirical evidence establishing the foundational validity and estimating the accuracy of any of the following forensic feature-comparison methods.  We provide that information below.

  1. The paper: Searching mixed DNA profiles directly against profile databases (1) reports, inter alia, on eight artificial mixed DNA profiles that were prepared by amplifying extracted DNA from three known sources at approximate mixture proportions of 10:5:1. These, and other mixtures constructed in silico, were searched against a real database of 145,470 profiles.  This is a substantial test of false inclusion rates.  These mixtures were also treated as four-person mixtures, which in effect tests mixtures of the type 10:5:1:0.  Again, please note the emphasis on false donor tests.
  2. The paper: Testing likelihood ratios produced from complex DNA profiles (2) reuses three of the four-person GlobalFiler mixtures (Table 1, mixtures 1-3) of which PCAST are already aware from (3, 4); however, it adds substantial false donor tests of sizes 12,000,000, 10,000 and 1,200,000. Whilst this could be viewed as reusing three mixtures, which it is, given the importance of false inclusions this move to large-scale testing of false donors is significant. Also, a recent publication, “Importance sampling allows Hd true tests of highly discriminating DNA profiles,” demonstrates the use of importance sampling to carry out false donor tests on profiles whose discrimination power exceeds the reach even of standard simulations (let alone of empirically observed profiles); a toy sketch of this idea follows the letter. In this work 1-, 2- and 3-person profiles were assessed.  This paper is “in press” at FSI: Genetics.
  3. The paper: The effect of the uncertainty in the number of contributors to mixed DNA profiles on profile interpretation (5) tests the performance of STRmix on in silico mixtures.  These are made electronically, not in vitro.  We recognise that this may be viewed as artificial, but please give it a hearing: it allows us total control.  For example, we can make a stutter peak larger than expected.  We find this type of testing very valuable for investigating “rare” events such as a larger-than-expected stutter.  Again, these are interpreted as n (where n is the apparent number of contributors) and n+1 mixtures, effectively adding a contributor at ratio 0.  These mixtures would be 3:1 and 3:1:0, 10:1 and 10:1:0, 3:1:1 and 3:1:1:0, 10:1:1 and 10:1:1:0.
  4. In addition, the FBI laboratory internal validation is back from the referees. This would not meet the “published” criterion but should be available soon.  It looks at the mixtures outlined below:
Number of contributors | Input DNA range per contributor (ng) | Contributor ratio range | Number interpreted | Number of H1-true propositions tested | Number of H2-true propositions tested
2 | 0.05 to 0.9 | 20:1 to 1:1 | 106 | 212 | 22,504
3 | 0.021 to 1 | 16:1:1 to 1:1:1 | 66 | 187 | 13,620
4 | 0.05 to 1 | 19:1:1:1 to 1:1:1:1 | 84 | 336 | 17,808
5 | 0.0156 to 1 | 10:1:1:2:2 to 1:1:1:1:1 | 19 | 120 | 5,256

These have also been interpreted as n-1 mixtures, and the 2- and 3-person mixtures as n+1.  As discussed, this effectively adds a contributor at proportion 0.

  5. Recently, John Buckleton outlined to you a much larger compilation of internal validations and new experiments that is underway. This is perhaps a year away from publication, but we will adopt your suggestion of placing it in the public domain before then.  The GlobalFiler experiment undertaken in South Australia is complete and much of the data analysis is done.  This covers:

2p mixtures 1:1 out to 200:1

3p mixtures 1:1:1 out to 20:10:1

4p mixtures 1:1:1:1 out to 50:25:10:1

5p mixtures 1:1:1:1:1 out to 10:5:5:2:1

with input DNA ranging from 1 ng to 50 pg, run in duplicate on two 3500xL machines.

This material is offered to PCAST.  At this point it is not written up, but we could do that for you specifically.  There is not a large number of different genotypes in this set, which has concentrated on different ratios.  We recognise that this would not meet the “published in the peer-reviewed literature” criterion.

May we ask PCAST to consider:

  1. It is not really how many true donor combinations you test that counts most, since these inform false exclusions; it is how many false donors you test;
  2. We do not think we have a weak spot at low template (or high ratio): our LRs tend properly to 1 as template decreases;
  3. We find mixtures with components nearer to equal, for example 5:1:1, more challenging; we have one case of low precision for this situation;
  4. STRmix needs pull-up, spikes, double back stutters and exotic stutters removed, for example 2 bp artefacts at tetranucleotide repeat loci (a toy pre-interpretation filter is sketched after this list); and
  5. We cannot handle trialleles; peaks separated by 1 bp that are unresolved at CE; or PCR effects resulting in extreme peak height ratio differences that were not modelled during implementation.
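To make point 4 concrete, a pre-interpretation cleanup might look like the toy sketch below.  The threshold and data structure are invented placeholders; real values and artefact rules come from each laboratory’s validation, not from this sketch.

```python
from dataclasses import dataclass

@dataclass
class Peak:
    size_bp: float     # fragment size in base pairs
    height_rfu: float  # peak height in RFU

def remove_2bp_artefacts(peaks, max_ratio=0.05):
    """Drop peaks sitting ~2 bp below a much taller peak at a
    tetranucleotide locus: that position is neither an allele nor a
    modelled back stutter (-4 bp), so it must be removed before the
    profile reaches the PG model.  max_ratio is an invented placeholder."""
    drop = set()
    for i, p in enumerate(peaks):
        for q in peaks:
            if abs((q.size_bp - p.size_bp) - 2.0) < 0.5 \
                    and p.height_rfu < max_ratio * q.height_rfu:
                drop.add(i)
    return [p for i, p in enumerate(peaks) if i not in drop]

# The 60 RFU peak 2 bp below the 1500 RFU allele is removed; the
# 116 bp back stutter (-4 bp) is kept for the model to handle.
peaks = [Peak(120.0, 1500.0), Peak(118.0, 60.0), Peak(116.0, 140.0)]
print(remove_2bp_artefacts(peaks))
```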

Yours sincerely,

John Buckleton, Duncan Taylor and Jo-Anne Bright

References

  1. Bright J-A, Taylor D, Curran J, Buckleton J. Searching mixed DNA profiles directly against profile databases. Forensic Science International: Genetics. 2014;9:102-10.
  2. Taylor D, Buckleton J, Evett I. Testing likelihood ratios produced from complex DNA profiles. Forensic Science International: Genetics. 2015;16:165-71.
  3. Taylor D. Using continuous DNA interpretation methods to revisit likelihood ratio behaviour. Forensic Science International: Genetics. 2014;11:144-53.
  4. Bright J-A, Taylor D, McGovern C, Cooper S, Russell L, Abarno D, et al. Developmental validation of STRmix™, expert software for the interpretation of forensic DNA profiles. Forensic Science International: Genetics. 2016;23:226-39.
  5. Bright J-A, Curran JM, Buckleton JS. The effect of the uncertainty in the number of contributors to mixed DNA profiles on profile interpretation. Forensic Science International: Genetics. 2014;12:208-14.
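The importance sampling mentioned in item 2 of the letter rests on a simple identity: any coherent LR satisfies E[LR | Hd] = 1, so the “tilted” genotype distribution q ∝ p × LR is properly normalised, and P(LR ≥ t | Hd) = E_q[1(LR ≥ t)/LR].  Sampling from q instead of p reaches tail probabilities far too small for direct simulation to see.  A minimal sketch on an abstract toy model (all numbers invented; this is not the code of the published method):

```python
import numpy as np

rng = np.random.default_rng(0)
N_LOCI, N_GENO = 20, 45  # toy: 20 independent loci, 45 genotypes each

# Genotype probabilities p and per-locus likelihood ratios lr, scaled so
# that sum_g p(g) * lr(g) = 1 at each locus -- the E[LR | Hd] = 1 property
# that any coherent likelihood ratio must have.
p = rng.dirichlet(np.ones(N_GENO), size=N_LOCI)
raw = rng.lognormal(mean=0.0, sigma=2.0, size=(N_LOCI, N_GENO))
lr = raw / (p * raw).sum(axis=1, keepdims=True)

def log_lrs_from(dist, n):
    """Sample n whole-profile log(LR)s, drawing genotypes from `dist`."""
    idx = np.array([rng.choice(N_GENO, size=n, p=dist[l]) for l in range(N_LOCI)])
    return np.log(lr[np.arange(N_LOCI)[:, None], idx]).sum(axis=0)

n, t = 100_000, 1e9
# Naive Hd true test: draw non-donors from p; the tail at t is unreachable.
naive = (log_lrs_from(p, n) >= np.log(t)).mean()
# Importance sampling: draw from q = p * lr and weight each hit by 1/LR.
log_lr_q = log_lrs_from(p * lr, n)
is_est = np.where(log_lr_q >= np.log(t), np.exp(-log_lr_q), 0.0).mean()
print(f"P(LR >= {t:g} | Hd): naive {naive:.3g}, "
      f"importance {is_est:.3g}, bound 1/t = {1/t:.3g}")
```

The naive estimate is essentially always zero at such thresholds, while the importance-sampling estimate is positive and respects the 1/t bound; that is what makes Hd true tests of highly discriminating profiles feasible at all.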

I met with PCAST on 18th November 2016, together with Mark Perlin and John Butler.  Mark left prematurely.

John Buckleton’s comments to PCAST: comments-on-the-pcast-report-to-the-president-forensic-science-in-criminal-courts-ii

Letter to Diana Pankevich, Ph.D., White House | Office of Science and Technology Policy

letter-to-diana-pankevich-ii

NDAA press release: ndaa-press-release-on-pcast-report

I have spoken with Eric Lander.  He was very helpful but stated that our efforts to “publish” here did not count.  “If it is not in a peer reviewed scientific publication it doesn’t exist.”  He and John Butler have offered to help with publication. Eric helpfully softened his stance against developers publishing and said that the presence of neutrals amongst the developers would be sufficient for him.

DFS District of Columbia internal validation: strmix-validation

Erie County internal validations:

strmix-implementation-and-internal-validation-erie-fusion

strmix-implementation-and-internal-validation-erie-id-plus

Scottish Police Authority: scottish-police-authority-summary-of-validation-of-strmixt-for-the-interpretation-of-globalfiler-profiles

DNA Labs International: strmix-letter-for-pcast_dli_2016-dna-labs

San Diego Police Department: strmix-mcmc

Michigan State Police: strmix-summary-msp

The views expressed on this site are my own and do not necessarily represent those of my organisation.
