At WWDC 2018 Apple announced ResearchKit 2.0, an open source framework aimed at enabling the use of mobile devices as a network of sensors for medical research. This release includes performance and UI improvements, documentation updates, changes to the community GitHub process, and several new "Active Tasks."
Earlier this year, Apple made its ResearchKit GitHub repository available, and it is now expanding GitHub privileges to some external community members, giving them direct access to the ResearchKit repository and the ability to merge pull requests. In addition, Apple is changing its release schedule for ResearchKit 2.0, pushing to the stable branch some time after the initial push to master; this gives the community time to try out the new features, provide suggestions, and submit pull requests.
ResearchKit 2.0 adopts the look and feel of iOS 11, with the UI updated across the entire framework to closely reflect the latest iOS style guidelines. Footers and buttons have been reworked to give participants a more intuitive experience as they navigate through your apps: footers now stick to the bottom of all views and support the new filled button styles, the cancel button now appears below the continue button, and an optional skip button can be offered as an alternative to canceling or continuing. Progress indicators are now aligned to the top right to make room for the new scrolling title implementation. The new Card View aims to enhance the look and feel of surveys and forms, and the PDF Viewer aims to let users easily navigate, annotate, search, and share any PDF.
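As an illustration, the sketch below builds a short task that ends with the new PDF Viewer and presents it with the standard ORKTaskViewController flow. The ORKPDFViewerStep class and its pdfURL property are taken from the 2.0 beta headers and may change before the stable release, and the bundled StudyInformation.pdf resource is hypothetical; treat this as a sketch rather than the definitive API.

```swift
import UIKit
import ResearchKit

final class StudyInfoViewController: UIViewController, ORKTaskViewControllerDelegate {

    // Present a short task that ends with the new PDF viewer step.
    func presentPDFTask() {
        let intro = ORKInstructionStep(identifier: "intro")
        intro.title = "Study Information"
        intro.text = "Review the study information document on the next screen."

        // ORKPDFViewerStep and pdfURL come from the 2.0 beta headers (assumption).
        let pdfStep = ORKPDFViewerStep(identifier: "studyPDF")
        pdfStep.pdfURL = Bundle.main.url(forResource: "StudyInformation", withExtension: "pdf")

        let task = ORKOrderedTask(identifier: "pdfReviewTask", steps: [intro, pdfStep])
        let taskViewController = ORKTaskViewController(task: task, taskRun: nil)
        taskViewController.delegate = self
        present(taskViewController, animated: true)
    }

    func taskViewController(_ taskViewController: ORKTaskViewController,
                            didFinishWith reason: ORKTaskViewControllerFinishReason,
                            error: Error?) {
        // Inspect taskViewController.result here before dismissing, if needed.
        taskViewController.dismiss(animated: true)
    }
}
```

The same ORKTaskViewController presentation and delegate pattern applies to the predefined active tasks described next.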
ResearchKit 2.0 includes new predefined "Active Tasks", which provide ways to invite users to perform activities under partially controlled conditions, such as:
- Speech Recognition lets developers present participants with either an image to describe or a block of text to repeat. The participant records audio and, once the recording is complete, a transcription appears for them. The task generates results that include the audio recording of what the participant said, the raw output of the speech-to-text engine, and the final transcription presented to the participant.
- Environment SPL Meter measures the current noise level in the participant's environment. The task can be incorporated as a step into any hearing test or other module and used as a gating step to ensure the participant is in a quiet enough environment to complete the assessment (see the sketch after this list).
- Tone Audiometry has been enhanced with an updated algorithm and implementation to better evaluate a user's hearing: tones decrease in dBHL until the user fails an attempt, then increase again until an attempt succeeds.
- Speech in Noise is another task that can be used to measure users' hearing health. During this test, users listen to a recording of a spoken phrase mixed with ambient background noise.
- Amsler Grid is a task that can be used to collect data about a user's vision: participants view a grid with one eye at a time and mark any areas that appear blurry, wavy, or distorted.
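As referenced above, a minimal sketch of a hearing module gated by the Environment SPL Meter step might look like the following. ORKEnvironmentSPLMeterStep comes from the 2.0 beta, and the threshold and sampling property names used here are assumptions based on those headers; the actual hearing assessment (for example Tone Audiometry or Speech in Noise) would be appended after the gating step.

```swift
import ResearchKit

// Sketch of a hearing module gated by the new Environment SPL Meter step.
func makeHearingTask() -> ORKOrderedTask {
    let instruction = ORKInstructionStep(identifier: "hearingIntro")
    instruction.title = "Hearing Check"
    instruction.text = "Find a quiet place before starting the test."

    let splStep = ORKEnvironmentSPLMeterStep(identifier: "environmentSPL")
    splStep.thresholdValue = 45            // maximum acceptable level, in dBA (assumed property)
    splStep.samplingInterval = 1           // seconds between samples (assumed property)
    splStep.requiredContiguousSamples = 5  // samples that must stay under the threshold (assumed property)

    let completion = ORKCompletionStep(identifier: "hearingDone")
    completion.title = "Environment check passed"

    // The actual hearing assessment steps would be inserted before the completion step.
    return ORKOrderedTask(identifier: "hearingModule",
                          steps: [instruction, splStep, completion])
}
```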
Apple also added a new Parkinson's research sample app, which demonstrates how to leverage the new Movement Disorder API available in the CoreMotion framework. ResearchKit 2.0 is a beta release, with updates to documentation, localization, accessibility, and QA coming over the next few months; it requires Xcode 9.0 or newer and a minimum supported Base SDK of 11.0.
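For orientation, a sketch of how an app might query the Movement Disorder API is shown below. CMMovementDisorderManager collects its data on Apple Watch, so this assumes the code runs in a WatchKit extension; the monitoring duration, query window, and result properties used in the loop are illustrative assumptions and are not taken from Apple's sample app.

```swift
import CoreMotion

// Sketch of recording and querying tremor data with the Movement Disorder API.
func fetchRecentTremorResults() {
    guard CMMovementDisorderManager.isAvailable() else { return }

    let manager = CMMovementDisorderManager()

    // Ask CoreMotion to record kinesia data for the next week (duration chosen for illustration).
    manager.monitorKinesias(forDuration: 7 * 24 * 60 * 60)

    // Query tremor results collected over the last 24 hours.
    let now = Date()
    manager.queryTremor(from: now.addingTimeInterval(-24 * 60 * 60), to: now) { results, error in
        if let error = error {
            print("Tremor query failed: \(error)")
            return
        }
        for result in results {
            // Result properties (startDate, endDate, percentMild, percentStrong) are assumptions
            // based on the CMTremorResult class; verify against the CoreMotion headers.
            print("Tremor window from \(result.startDate) to \(result.endDate): " +
                  "mild \(result.percentMild), strong \(result.percentStrong)")
        }
    }
}
```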