5 Things to Consider About Data Collection in AAC
As a rule, SLPs are pretty good about collecting data in their clinical work. Here are some of our prAACtical thoughts about data collection.
1. Don’t bite off more than you can chew. We’ve visited several programs where the client data filled a huge 3-ring binder. In some places, they logged the data daily, reviewed it frequently, and actually USED it to make programmatic decisions. If that works for you, great! But most programs only reviewed the data when they had to report it or prior to a visit by someone who might want to see and discuss those data. In those cases, the data really wasn’t serving its original purpose: to see how instruction might need to be tweaked for a client who was learning quickly, slowly, or not at all. The takeaway: Don’t collect more data than you’re prepared to review and put to use.
2. We should be tracking meaningful things. I love this social worker’s ideas for data collection at recess. She tracks things like the percent of time a child is engaged in play with others, the number of times they need help resolving conflict, and the number of times they are sought after as a play partner (as compared to how many times they seek out others). Now, that’s prAACtical!
3. You don’t have to collect data at every session. For my clients with semantic goals, for example, some sessions are geared to instruction. In those sessions, we provide focused language stimulation, explicit teaching, and lots of practice in different activities. We often don’t take data on those words until a future session. That’s okay. It makes sense not to take data on a word meaning after you just taught and practiced it, right? Don’t slip into autopilot and take data every session just because that’s the way we’re used to doing it. Take data when it makes sense to do so.
4. Don’t use data to make decisions until you’re confident that the data are accurate. We often assume that the data are correct, but it’s a good idea to check, particularly if you are new at this. And you know what? Taking data on some of these skills isn’t as easy as you might think. Last month we were working with some beginning communicators who had goals like getting attention and signaling for more. Sounds simple, right? Well, it wasn’t. One person counted only communicative acts that used picture symbols. Another counted vocalizations and touches. Who was right? They both were, because we hadn’t clearly defined what constitutes a request for attention or recurrence. Once we did that and everyone was looking for the same thing, we were able to get usable data.
Inter-rater reliability isn’t just for research. We often do this with our student clinicians at the start of a new semester, because unless the data are accurate, they don’t represent the client’s actual performance. And if we don’t have a clear picture of the client’s current level of performance, then setting a target for future achievement isn’t very meaningful.
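For those who like to see the arithmetic spelled out, a quick inter-rater check can be as simple as comparing two observers’ trial-by-trial codes and computing percent agreement. This is just a minimal sketch; the observer labels and codes below are invented for illustration, not real client data.

```python
def percent_agreement(rater_a, rater_b):
    """Return the percentage of trials on which two raters agreed."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must code the same number of trials")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)

# Two observers code the same ten communicative acts, e.g. as
# "request" vs. "other" (hypothetical labels for this example).
clinician = ["request", "other", "request", "request", "other",
             "request", "other", "request", "request", "request"]
student   = ["request", "other", "request", "other", "other",
             "request", "other", "request", "request", "other"]

print(percent_agreement(clinician, student))  # 80.0
```

If agreement is low, that’s the cue to sit down together and tighten the definition of the target behavior before trusting the numbers.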
5. The easier it is to record the data, the more likely we are to be accurate. I’m a huge fan of taking care of this with some “up front” work. If we spend some time planning and developing forms that fit our setting, goals, and system of prompts, then we are more likely to use those and get reliable data. Like everyone else, we’ll jot notes on scrap paper, make hash marks on our hands, or scribble some hieroglyphics on a strip of masking tape from time to time. But on a good day, we try to go into the session with a pre-prepared data sheet like this one. It’s a little different than most, but I find it easy to use during a busy session. To use it, cross out I if the client’s response was independent, PP if a partial prompt was needed, FP if a full prompt was needed, or NR if there was no response. At the end, tally up the number of each one, divide by the total number of responses, and convert to a percentage. Easy peasy.
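The end-of-session arithmetic above (tally each code, divide by total responses, convert to a percentage) can be sketched in a few lines of Python. The session codes here are made up for illustration and don’t come from the data sheet itself.

```python
from collections import Counter

def summarize_session(codes):
    """Return {code: percent of total responses} for a list of trial codes."""
    total = len(codes)
    counts = Counter(codes)  # tally the number of each code
    return {code: round(100 * n / total, 1) for code, n in counts.items()}

# One hypothetical session of 10 trials, coded as described above:
# I = independent, PP = partial prompt, FP = full prompt, NR = no response
session = ["I", "I", "PP", "I", "FP", "I", "PP", "I", "NR", "I"]
print(summarize_session(session))
# 6 of 10 responses were independent, so "I" comes out to 60.0
```

The same tally-and-divide logic works whatever prompt hierarchy your form uses, as long as every trial gets exactly one code.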
This post was written by Carole Zangari