…to 46 items for Round 1. A briefing document (see S1 Doc) was provided to give panel members the context of the exercise and to clarify its purpose. The statements were presented in survey form using the Qualtrics platform (www.qualtrics.com). These were prefixed by the text in S2 Doc to reinforce the background to the survey, and to emphasise that the goal was to seek consensus on how to identify children in need of extra, specialist help with language, beyond what is usually available in the classroom. After some initial items that asked about professional background, the 46 Round 1 statements were presented, and panel members were asked to rate each one twice on a seven-point Likert scale ranging from "strongly against" to "strongly in favour": once for relevance (i.e., should a statement on this theme/topic be included in our guidelines?) and once for validity (i.e., to what extent do you agree with the statement as currently worded?).

Participant responses to Round 1 were collated, and the distribution of responses, together with the associated anonymised comments, was fed back to all panel members by PT. Responses and comments were scrutinised by the moderators, who remained blind to the identity of those commenting.

PLOS ONE | DOI:10.1371/journal.pone.0158753 | July 8, 2016 | 6 / Identifying Language Impairments in Children

The adjudicator gave general advice on procedure; for example, it became clear that the 'relevance' dimension was not needed, as all but two items were deemed relevant, and so this dimension was dropped in Round 2. Between Round 1 and Round 2 we departed from our original protocol in one respect: we discovered that a group in the Netherlands had conducted a Delphi exercise [30] attempting to specify criteria (which they termed 'red flags') for identifying major speech or language problems in different age bands.
The importance of taking age into account was one issue flagged in Round 1 free-text comments, and so we decided to incorporate some of these items in Round 2. We also moved to a five-point rating scale for Round 2. On the basis of ratings and comments, and advice from the adjudicator, the two moderators agreed on rewording of some items, amalgamation of others, and removal of yet others. The set of items used in Round 2 consisted of the 27 statements shown in S4 Doc, grouped into three broad categories: referral for specialist assessment/intervention, assessment, and accompanying conditions. The S4 Doc materials also show the relationship between Round 1 and Round 2 items, and are colour-mapped to show the extent of agreement with each statement in both rounds. Since it was evident that a short statement was not always sufficient without explanatory context, for Round 2 the panel was asked to read a background document giving a more detailed rationale for each statement before rating it (S5 Doc).

In addition, we used hierarchical cluster analysis with Round 1 scores to see whether there were obvious patterns in responding related to either the country of the respondent or the professional discipline. This technique begins with each point as its own cluster and then repeatedly merges nearest-neighbour clusters until a single cluster remains. The pattern of merges can then be displayed graphically as a dendrogram. The analysis was performed in the R statistical software [31] using the pvclust package [32], which performs hypothesis tests, via bootstrapping, to determine whether a cluster truly exists.
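The analysis itself was run in R with pvclust, as described above; purely as an illustration of the agglomerative procedure (each point starts as its own cluster, and the nearest pair of clusters is merged repeatedly until one remains), here is a minimal pure-Python sketch. The single-linkage distance and the toy "rating profiles" are assumptions for illustration only, not the study's data or pvclust's defaults:

```python
# Illustrative sketch of agglomerative (bottom-up) hierarchical clustering.
# Assumptions: Euclidean distance, single linkage, hypothetical toy data.
# The paper's actual analysis used R's pvclust, which adds bootstrap
# support values for each cluster; that step is not reproduced here.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def single_linkage(c1, c2):
    # Cluster-to-cluster distance = distance between their closest members.
    return min(euclidean(p, q) for p in c1 for q in c2)

def agglomerate(points):
    """Return the merge sequence as a list of (cluster1, cluster2, distance)."""
    clusters = [[p] for p in points]   # every point starts as its own cluster
    merges = []
    while len(clusters) > 1:
        # Find the nearest pair of clusters.
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = single_linkage(clusters[i], clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((clusters[i], clusters[j], d))
        clusters[i] = clusters[i] + clusters[j]  # merge j into i
        del clusters[j]
    return merges

# Hypothetical 2-D rating profiles for five panellists (not study data).
profiles = [(1.0, 1.0), (1.2, 0.9), (5.0, 5.1), (5.2, 4.9), (9.0, 9.0)]
merges = agglomerate(profiles)
```

The order and heights of the merges in `merges` are exactly what a dendrogram displays graphically: tight pairs of profiles join near the bottom, and the isolated profile joins last.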
Results and Discussion

Round

S3 Doc is a report s.