Post by John Uliano on Jul 21, 2021 16:17:07 GMT
What are two thoughts that stood out for you in each of the readings?
In the Using Data articles, I was struck by the importance of choosing KPIs that truly indicate progress toward meeting a company’s mission, as opposed to merely reflecting an activity, and by the importance of “starting at the end” to help identify what success looks like. I also appreciated the mention of data visualization/infographics, as I feel these can often make “scary” data more relatable to staff. My takeaways from the Analytics 3.0 reading centered on the use of open data and transparency, as well as how, “before big data,” so much time was spent on preparing data rather than analyzing it. When the data was analyzed, it was descriptive only — it was not used to explain what occurred or to predict what might occur.
These points stood out for me because they mirror some of my challenges with the data collection I conduct in my current role in quality assurance. We collect a lot of data on our team, but I often wonder how it is embraced by program staff, let alone whether it is ever properly analyzed or used in support of our company’s mission.
How might you apply these learnings to your everyday work? Please give a specific example.
I would like to better evaluate and analyze the data I am collecting from our programs. At times, I feel I am just collecting and organizing numbers for their own sake, without truly considering the outcomes collected and how they could impact the program. While I have introduced ways to compare data from month to month (e.g., in our participant satisfaction survey process), I do not analyze, or ask the necessary questions about, why the data may be changing monthly. I do not consider why a score has changed from one month to the next, or what may have worked well or not worked. It is actually a pretty frustrating part of my work at present.
When have metrics mattered in your professional experience?
Metrics have always informed my experience in social services. Most recently, prior to joining the quality assurance team, my programs were focused on employment placement and retention deliverables. When working in a supportive housing setting, I was cognizant of vacancy rates, lengths of stay, and hospitalization rates. In quality assurance, we consistently measure adherence to indicators tied to our internal processes and procedures, which are based on best practice and funder requirements.
How do you use data in your current role — would you describe your team as data driven?
I think the Quality Assurance Team is data-driven. We pull together loads of data related to customer service, participant satisfaction, the quality of case note/service planning/assessment documentation, etc. We collect and report out on data sets every month, and we have developed trainings based on the outcomes of our reviews, so we are reactive to outcomes. I think we only minimally analyze the data, though, and I can identify ways to be more predictive, and eventually prescriptive, in our use of our outcomes.