
Cognitive Rehabilitation after Stroke

07 February 2024

UC has a Cognitive Rehabilitation after Stroke research group that develops adaptive, computer-based cognitive training for post-stroke rehabilitation. Learn more about our group.


Overview of the Project

The goal of our project was to develop adaptive, computer-based cognitive training for post-stroke rehabilitation. We focused on prospective memory (PM) because PM failure can interfere with independent living, resulting in, for example, forgetting to take medication, to switch off the stove, or to attend doctor’s appointments.

There is recent evidence in the literature that using visual mnemonics may help improve the skills underlying prospective memory. Building on those findings, we developed a comprehensive treatment based on visual imagery, together with a Virtual Reality (VR) environment in which stroke survivors can practise their prospective memory.

We evaluated our approach with three groups of people: healthy young people, healthy older people, and stroke survivors.

Running several studies gave us a view of how PM works across these age groups and, furthermore, of whether the treatment is a successful strategy for improving PM skills.

The first two studies were short, and focused on the effect of our visual imagery training only on the two groups of healthy participants. The third study involved a longer, more sophisticated visual imagery training, followed by practice in the VR environment.

The first study showed that young healthy people generally do not have problems with prospective memory. When asked to use visual mnemonics, high scorers in the treatment group seemed to do better, recalling more items than their counterparts in the control group.

Low scorers, however, either did not seem to benefit from visual mnemonics or chose not to use them.

The second study demonstrated beneficial effects of our treatment on older healthy people, showing a significant increase in their ability to recall PM tasks after a short period of training.

In the study, the participants were taught how to use visual imagery through a computer-based tutorial (10 minutes) and were later evaluated on memorising and performing PM tasks while interacting with pre-recorded videos.

Even though the session was short (2 hours), there was a statistically significant improvement in the participants’ ability to recall PM tasks. Using visual mnemonics, and making the scenario personal and concrete in their minds, significantly improved the chances of recall.

The third study involved recruiting stroke survivors; each participant attended ten sessions over ten weeks. Recruiting participants who met the inclusion/exclusion criteria proved difficult, and we completed the study in October 2015.

We collected a substantial amount of data. The most important finding was that our intervention significantly improved participants’ PM skills, and that the effect was stable when measured four weeks after the end of the treatment.

What is Prospective Memory?

When thinking about memory we often think about remembering past events: What did I do for my last birthday? What did I have for dinner yesterday? Where did I go for my last holiday?

Remembering things from the past is called retrospective memory. Retrospective memory only covers one aspect of memory. The other is called prospective memory.

Prospective memory is remembering future events. We use prospective memory very often in our daily lives: "I need to remember to go to my doctor next Tuesday" is an example of prospective memory. Anything that involves thinking about future events or planning requires prospective memory.

Prospective memory is essential to live independently and safely. Remembering the steps that we need to take before that deadline at work, or remembering to turn the stove off after a set amount of time requires prospective memory.

Interestingly, for prospective memory to work well, a person must also have relatively good retrospective memory. We must not only remember that we need to do something in the future but we also need to remember what we need to do.

The tasks that require prospective memory are often classified into two (or sometimes three) groups. 

Time-based tasks are tasks that need to be done at a certain time. For example, my appointment with my client from work is tomorrow at 2pm. At 6pm, I want to watch the news on TV. 

Event-based tasks are tasks that are triggered by a certain event. For example, after dinner I need to take my medication (dinner might occur at roughly the same time each day, but the medication needs to be taken after the 'dinner' event), or when I go past the supermarket, I need to pop in and buy some milk.

Sometimes we also talk about a third type: activity-based tasks. An activity-based task is very similar to an event-based task, in that the task occurs after an event, but here the events are closely tied to a single activity and are often viewed as sub-tasks. Going out to play tennis, for instance, involves a number of sub-tasks, each of which triggers the next: putting on your tennis shoes might prompt you to grab your tennis racquet. Each of the tasks required to get you to the tennis court could be classified as an activity-based task.
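To make the distinction concrete, the three task types could be represented as simple data structures. The sketch below is purely illustrative; the class and field names are ours, not part of the project's software.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class TimeBasedTask:
    """Triggered by the clock, e.g. 'watch the news at 6pm'."""
    description: str
    due_at: datetime


@dataclass
class EventBasedTask:
    """Triggered by an external event, e.g. 'take medication after dinner'."""
    description: str
    trigger_event: str  # e.g. "dinner finished" or "passing the supermarket"


@dataclass
class ActivityBasedTask:
    """Triggered by completing the previous step of a single ongoing activity."""
    description: str
    activity: str       # e.g. "going out to play tennis"
    previous_step: str  # e.g. "put on tennis shoes" prompts "grab the racquet"
```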

People who have suffered from brain injury often also have problems associated with memory. Depending on the type and location of injury, a person might have reduced performance in both retrospective and prospective memory.

Prospective memory is often one of the main cognitive reasons for a loss of independence and even the need for long-term (and full-time) carers. A person's independence and safety often depend on the performance of prospective memory.

Often, this might lead to the person not being able to work. Sometimes, the person requires carers to ensure their safety is not compromised (e.g. the stove gets turned off after cooking, and the correct medication is taken at the right time).

Stroke is one of the leading causes of death and disability in New Zealand. With our ageing population (an inverted population pyramid) and the incidence of stroke at younger ages, the need to provide cognitive support and rehabilitation is increasing.

The rehabilitation provided needs to be cost-effective and ideally (eventually) be customised to the individual's needs.

Customisation might mean scheduling the training at the times that suit the individual best (particularly important for those who have suffered a brain injury), altering the level of difficulty, providing additional support, and so on.

Sub Projects

During this project, we conducted a few sub-projects either as Honours or summer projects.

Two Honours projects were conducted investigating brain-computer interfaces.

Electroencephalography (EEG) provides a means of accessing neural activity, allowing a computer to analyse information from the brainwave patterns produced by thought.

The Emotiv EPOC is a commercially available 14-channel wireless EEG device. Its manufacturer claims that it can easily be trained to control robots and wheelchairs and to play games.

There have been reports in the literature of the EPOC being used to control software via facial expressions and thoughts. The device is inexpensive, which motivated us to investigate it further for potential use in the VR environment.

The first Honours project using EPOC was conducted by Matthew Lang in 2012, who investigated the usability of EPOC as an input device. He conducted two studies, both using healthy people.

The first study investigated whether it was possible to train EPOC to perform two different actions (moving a virtual cube left or right) in a short period (11 minutes).

The results of a study with 10 healthy participants were disappointing: the participants achieved an average success rate of only 36 per cent. However, the actions performed were artificial, so in the second study Matthew developed a small software system in which participants selected an answer to a given question from three options, after training the device for 15 minutes.

A study was conducted with 21 healthy participants, who trained EPOC for an average of 15 minutes and achieved an average success rate of 47 per cent. Some participants reported discomfort after about half an hour of wearing the device.

The conclusion was that EPOC required too much training time and therefore it would not be a good solution for stroke survivors.

The second EPOC study was Tegan Harrison’s Honours project in 2013; she focused on tracking the user’s emotions using the Emotiv EPOC device.

The affective state of the user is of high importance, as negative emotions (such as stress and frustration) may significantly reduce the effect of training.

Tegan first performed a study comparing the affective states identified by EPOC with those induced by a validated set of photographs. No significant relationships were found between the participants’ self-report scores and the emotional states reported by EPOC.

During the summer of 2012/2013, Sam Dopping-Hepenstal was awarded a UC summer scholarship, partly funded by our Marsden grant.

Sam investigated whether our VR environment could be extended into a tool for testing a person’s prospective memory, and developed a modified version of the environment for this purpose.

Initially, the tasks are presented to the user to memorise, and the user is then tested to determine whether they can remember them. After that, the user can practise in the VR environment, and finally performs the test within the environment.
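As a rough illustration of that session structure, the flow might be sketched as follows; the function names and the simple recall score are our own assumptions, not taken from Sam’s implementation.

```python
# Illustrative sketch of the assessment flow described above: present the
# tasks, test recall, allow practice in the VR environment, then test again
# inside the environment. All names here are hypothetical.

def present_tasks(tasks):
    """Phase 1: show each PM task to the user to memorise."""
    for task in tasks:
        print(f"Memorise: {task}")


def recall_score(tasks, recalled):
    """Proportion of the presented tasks that the user recalled."""
    return len(set(recalled) & set(tasks)) / len(tasks)


def run_assessment(tasks, recalled_before_vr, recalled_in_vr):
    present_tasks(tasks)                                # memorisation phase
    baseline = recall_score(tasks, recalled_before_vr)  # recall test
    # ... free practice within the VR environment would happen here ...
    in_vr = recall_score(tasks, recalled_in_vr)         # test inside the environment
    return baseline, in_vr
```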

During the summer of 2013/2014, Anthony Bracegirdle was awarded a UC summer scholarship, partly funded by our Marsden grant. In the summer project, Anthony experimented with two devices: Razer Hydra and Oculus Rift.

The Razer Hydra is a set of two hand-held controllers that sense motion and can be used to navigate around the environment and interact with it.

The Oculus Rift is a VR headset that gives the user a sense of actually being in the environment with a stereoscopic view, providing a full 3D immersion in the environment.

Anthony conducted a study with 24 participants, who each tested the virtual system several times, completing a set of household tasks within the environment. Each participant trialled the system six times: three different interaction devices (keyboard, joystick, and Razer Hydra) without the Oculus Rift, and the same devices with the Oculus Rift.

The participants completed several tasks in a specific order, such as taking items from the pantry or turning on the radio. The participants then completed a short survey and rated their experiences with the devices.

Users preferred the joystick for interaction, and the Oculus Rift induced motion sickness in an alarming number of participants: 18 experienced motion sickness, and 5 of those were affected so severely that they had to stop and finish the experiment early.

In 2014, Anthony completed his Honours project, in which he investigated another input device, the Leap Motion controller. This inexpensive gesture-based device was released commercially in 2013: the user places the device in front of them and gestures above it.

Anthony integrated the Leap Motion device into the VR environment and designed three different gesture modes, two unimanual and one bimanual. The first mode uses the airplane metaphor with the dominant hand: the pitch of the hand controls forward/backward movement, the roll of the hand controls rotation, and the degree of inclination controls the speed of movement.

The bimanual mode is also based on the airplane metaphor and uses the dominant hand for rotation and the other hand for forward/backward movements.

The third mode is the positional one where moving the hand forward/backward and side to side is reflected in corresponding changes in the environment.
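For illustration, the unimanual airplane mapping might look something like the sketch below; the pitch and roll values are assumed to arrive from the device in radians, and the dead zone and gain values are our assumptions rather than the values used in Anthony’s system.

```python
import math

# Dead zone and gains are illustrative assumptions, not the project's values.
DEAD_ZONE = math.radians(10)   # ignore small, unintentional hand tilts
MOVE_GAIN = 2.0                # forward/backward speed per radian of pitch
TURN_GAIN = 1.5                # rotation speed per radian of roll


def airplane_control(pitch: float, roll: float) -> tuple[float, float]:
    """Map hand pitch to forward/backward speed and hand roll to rotation.

    The further the hand is inclined, the faster the movement, matching the
    'speed controlled by inclination' behaviour described above.
    """
    forward = -MOVE_GAIN * pitch if abs(pitch) > DEAD_ZONE else 0.0
    turn = TURN_GAIN * roll if abs(roll) > DEAD_ZONE else 0.0
    return forward, turn
```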

A study investigated the viability of the Leap Motion controller as an interaction device, and also whether physical fatigue would be an obstacle to its use.

The study involved 30 participants, each using all three modes but in a random order, to decrease the practice effect.

The participants strongly preferred the positional mode and also strongly disliked the bimanual mode. The participants were also significantly slower using the bimanual mode. The two unimanual modes were competitive with the joystick.

The results of the study therefore show that the Leap Motion controller is a viable device to use in the VR environment.

Over the 2013/14 summer, Scott Ogden worked on the 'Creating and evaluating a model for a user in a rehabilitative virtual-reality environment' project as part of his COSC486 Research Project course.

In this project, a constraint-based model was developed for the VR environment. Users were given customised feedback and were modelled according to their behaviour within the environment.
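In a constraint-based model, each constraint pairs a relevance condition (when the constraint applies) with a satisfaction condition (what should then be true), and feedback is generated when a relevant constraint is violated. The sketch below illustrates the idea only; the example constraint is made up and is not taken from Scott’s model.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Constraint:
    relevance: Callable[[dict], bool]     # when does this constraint apply?
    satisfaction: Callable[[dict], bool]  # what should hold when it applies?
    feedback: str                         # message shown if the constraint is violated


def evaluate(state: dict, constraints: list[Constraint]) -> list[str]:
    """Return the feedback for every constraint violated in the current state."""
    return [c.feedback
            for c in constraints
            if c.relevance(state) and not c.satisfaction(state)]


# Hypothetical example: remind the user to turn the stove off after cooking.
stove_constraint = Constraint(
    relevance=lambda s: s.get("cooking_finished", False),
    satisfaction=lambda s: s.get("stove_off", False),
    feedback="Remember to turn the stove off after cooking.",
)
```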

In 2012, we began investigating whether the VR environment could be controlled using eye movements. In this project, Jon Rutherford used the Tobii eye tracker as the input device.

Tobii gives sufficiently precise information about the user’s eye gaze and is robust to head movements. A version of the VR environment controlled by eye gaze was developed, which allowed the user to move around by looking to the left or right of the viewport and to select or interact with objects by blinking. This version has not yet been evaluated in a study.
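A rough sketch of that gaze-to-movement mapping is shown below; the normalised gaze coordinate, edge margins, and blink-to-select rule are assumptions for illustration, not Jon’s actual implementation.

```python
# Gaze x is assumed to be normalised to [0, 1] across the viewport width.
LEFT_MARGIN = 0.2    # gazing at the leftmost 20% of the viewport turns left
RIGHT_MARGIN = 0.8   # gazing at the rightmost 20% turns right


def gaze_command(gaze_x: float, blinked: bool) -> str:
    """Translate one gaze sample into a navigation or selection command."""
    if blinked:
        return "select"      # blinking selects / interacts with the gazed-at object
    if gaze_x < LEFT_MARGIN:
        return "turn_left"
    if gaze_x > RIGHT_MARGIN:
        return "turn_right"
    return "idle"
```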

Member of the stroke rehabilitation team using heatmaps
Members of the stroke rehabilitation team using virtual reality tools

  • Tanja Mitrovic (Primary Lead Investigator)
  • Moffat Mathews
  • Stellan Ohlsson (University of Illinois at Chicago)
  • Audrey McKinlay (University of Melbourne)
  • Jay Holland
  • Anthony Bracegirdle
  • Tegan Harrison
  • Sam Dopping-Hepenstal
  • Scott Ogden
  • Jon Rutherford
  • Katie Dainter (Psychology Research Assistant)

The funding for this project was provided through a Marsden Grant, which ended in November 2014.

The Royal Society of New Zealand manages the Marsden Fund which supports excellence in science, engineering, maths, social sciences and the humanities in New Zealand by providing grants for investigator-initiated research.

Some of the sub-projects were also funded by the UC Summer Scholarship programme.

Interested?

If you are a researcher, clinician, or prospective thesis student interested in continuing this research with us please contact us.
