It’s on like Donkey Kong: Dissertation update

So here is my little ugly duckling of a dissertation proposal, with data collection planned for this Fall. Wish me luck! Of course my hope is that it grows into a swan that helps push the field forward on theoretical, methodological, and practical levels, but, to be perfectly honest, I'll be glad even if it grows into a little larger ugly duck.

Tentative Title: The Influence of Peer Review on Writing Achievement and Individual Writing Self-Efficacy

Draft Abstract: This study will examine the influence of peer feedback and review on individual writing achievement and self-efficacy. Undergraduate first-year composition students will engage in their normal instructional activities, using the web-based Eli Review program to conduct peer feedback and review sessions. Using data collected from surveys and from Eli Review, the influence of giving and receiving writing feedback in peer review groups on both individual writing achievement and individual self-efficacy will be modeled using a multilevel, social-network analysis methodology. The influence of other possible mediating variables will also be explored, including the influence of the instructor; the influence of outside help such as roommates, family members, or the university writing center; and the individual’s prior achievement. This study will contribute to understanding the influence of peers in the writing peer feedback cycle as well as the ways in which writing achievement and self-efficacy are influenced.
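For my own future reference, here is a minimal sketch of what the multilevel piece of that analysis might look like in Python with statsmodels. This is not the actual study code; every file and column name (section_id, feedback_given, feedback_received, prior_achievement, final_score) is a hypothetical placeholder for whatever the surveys and Eli Review actually export.

```python
# Minimal sketch (not the actual study code) of a multilevel model of
# peer-review influence on writing achievement. All column names are
# hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical export combining survey responses and Eli Review data
df = pd.read_csv("peer_review_data.csv")

# Students (level 1) nested within course sections (level 2): a random
# intercept for section, with fixed effects for feedback given/received
# and prior achievement.
model = smf.mixedlm(
    "final_score ~ feedback_given + feedback_received + prior_achievement",
    data=df,
    groups=df["section_id"],
)
result = model.fit()
print(result.summary())
```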

My Epistemological Leanings: or how I know I know

Recently for my Introduction to Qualitative Methods course, I was asked to identify my own epistemological leanings, specifically in the context of how I design research studies. I thought it would be useful to post those musings here and check back in a bit to see if they still hold. As always, comments and criticisms are welcome:

The root of the word “science” means “to know.” My epistemological leanings, my understandings of how I “know,” are heavily influenced by my undergraduate training in the biological sciences. The scientist is trained not only in designing and executing experiments, but also in observing the natural world. Charles Darwin, surely a notable scientist, grounded his theory of evolution in what is technically a case study of the Galapagos Islands’ avian residents. Science and research need not be limited to one methodology, for there is much that we do not understand, and it would be foolish to think that one method alone is the sure path to knowledge.

I situate myself in a place that is driven by questions and methods strongly grounded in theoretical frameworks. If there is an observable phenomenon that I have questions about, I consider those questions through the lens of a theoretical frame, asking first what specific theories might predict about the phenomenon. Nonetheless, predicting outcomes is not the only way to do science or to have knowledge. I consider myself post-positivist in this sense. I habitually think in terms of theories making predictions, yet I recognize that not every human action is predictable. Learning and the study of education are complex and messy, and not every situation will fit in the theory-prediction box. As Erickson (1986) notes about school classrooms, “Interpretive researchers presume that microcultures will differ from one classroom to the next, no matter what degree of similarity in general demographic features obtains between the two rooms, which may be located literally next door or across the hall from one another” (p. 128). No matter how large the sample size or how robust the theory, there will always remain a percentage of outcomes that goes unexplained. This ambiguity is where I find the entire “wild profusion” (Lather, 2006) so important.

I earnestly reject the quantitative/qualitative divide while recognizing that my own research brain tends to work best in a more quantitative environment. That being said, I joke that I am a positivist with critical theorist (and even post-structuralist) leanings. I may feel most at home with a giant data set and a regression, but that does not mean I accept the dominant culture’s blind faith in the validity of these methods. As a former K-12 classroom teacher, I have lived experience of the ways in which statistics and data are used as a hegemonic tool, one often used to disempower teachers and students. As Lather (2006) states, “Profoundly interventionist in the history of the welfare state, statistics has served as a political tool in the theatre of persuasion in a way that maps onto the recognized needs of policymakers” (p. 49). I began teaching the year No Child Left Behind was enacted: I can’t think of a better example of political theater in which statistics played such a menacing part. Social science cannot be conducted in a vacuum, and as a social scientist I feel it is my responsibility not only to advance the field, but to do so ethically. My definition of ethical responsibility includes a responsibility to identify the ways in which my own privileges (as a middle-class person, as a white person, as an educated person, as a quantitative researcher, etc.) inform my research.

 

References

Erickson, F. (1986). Qualitative methods in research on teaching. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 119-161). New York: Macmillan.

Lather, P. (2006). Paradigm proliferation as a good thing to think with: Teaching research in education as a wild profusion. International Journal of Qualitative Studies in Education, 19, 35-57. doi: 10.1080/09518390500450144

Thar’s gold in them thar hills

Pomona Mining Shack (photo: CC Super Hanz)

If you’ve been following along on this blog lately, you already know that I am in the inception stage of plotting my research practicum: a study that I will conduct during this second year of my program. It is seen as a sort of “mini-dissertation,” complete with everything from convincing a committee that your proposal is a good one to defending the results when you are done.

I am a social network analysis gal, and I’ve been obsessed with Penuel et al.’s examination of teachers’ interactions in a school, which took a social capital approach. I keep coming back to this idea, and every time I read the study I think: I WANT. Now, that being said, this IS NOT EASY TO DO AT ALL. It is even harder considering I am a baby researcher and there is only one of me, while this study had four really amazing researchers working on it. The whole section of analysis done by Ken Frank is all Greek to me (haha! stats joke!); I am nowhere close to being able to see how they get from point A to point B in that analysis. Fingers crossed I get to take his class before I’m done: Survey Design was offered at the same time, and I really needed to take that one, too. Plus, I’ve been told that the other research methods course I’ll be taking (advanced multivariate) will help me better understand social network analysis.
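To wrap my head around it, I wrote myself a toy example of one building block that tends to show up in this kind of influence analysis: a network “exposure” term, i.e., the average prior behavior of the colleagues a teacher is tied to. This is emphatically NOT the Penuel et al. / Frank analysis, just my own little illustration with made-up names and data.

```python
# Toy illustration of a network "exposure" term: for each teacher, the mean
# prior behavior of the colleagues they report interacting with. Not the
# Penuel et al. / Frank analysis; all ties and values are made up.
import networkx as nx

# Hypothetical who-talks-to-whom ties among five teachers
ties = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]
prior_behavior = {"A": 3.0, "B": 1.0, "C": 4.0, "D": 2.0, "E": 5.0}

G = nx.Graph()
G.add_edges_from(ties)

# exposure_i = mean of prior_behavior over i's network neighbors
exposure = {
    t: sum(prior_behavior[n] for n in G.neighbors(t)) / G.degree(t)
    for t in G.nodes
}
print(exposure)  # e.g., teacher A is "exposed" to the average of B and C
```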

At any rate, in my attempt to bite off a chunk of this type of work, I found some amazing databases on the Michigan Department of Education website, including a database of tech-proficient teachers. I was thrilled. I am interested mainly in tech adoption by teachers, and here was one measure of tech proficiency for the entire state! With all the other data collected on teachers in the state, I thought I could really do some fun analysis, checking for correlations with all sorts of other factors. It was gold!
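For the record, the “fun analysis” I was imagining was something as simple as this; the file and column names below are entirely made-up stand-ins, not the actual Michigan Department of Education fields.

```python
# Sketch of the exploratory correlation pass I had in mind. File names and
# columns (teacher_id, tech_proficiency, years_experience, pd_hours,
# class_size) are hypothetical placeholders.
import pandas as pd

tech = pd.read_csv("tech_proficiency.csv")           # hypothetical proficiency table
teachers = pd.read_csv("teacher_demographics.csv")   # hypothetical teacher covariates

merged = tech.merge(teachers, on="teacher_id", how="inner")

# Quick look at how tech proficiency co-varies with other teacher factors
cols = ["tech_proficiency", "years_experience", "pd_hours", "class_size"]
print(merged[cols].corr())
```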

Then I sat down with my advisor, and as we worked on my crazy spreadsheets and got them aligned in SPSS, I realized: oh crap, there are ALL sorts of problems with this data. These are not tiny limitations; these are the types of flaws where a great deal of work would leave me knowing basically nothing about the rates of tech proficiency in the state of Michigan. D’oh! Once I had climbed the hill and started digging, I realized those shiny objects were of little value.

Good thing I had a backup plan.

The lesson I am learning is that sometimes in research, my plans are not as good as I think they are. In my lit review, I stumble across the perfect article based on its title and abstract, only to be let down when I actually read it. In my research proposal, I am on method SEVEN now, as I shed each one for this or that fatal flaw. (I admit it stung a little when my advisor said, “I can’t even remember what you are studying anymore.” Ouch. But true, oh so painfully true.) I can only hope that through this cycle of thinking, proposing, poking holes, finding flaws, and starting over again, I will end up with something I can get approved, carry out, and defend. I would really like to discover the magic formula that would make the process more efficient, so if anyone has any suggestions, let me know. I suspect, though, that this is how it goes for everyone. Maybe it’s like learning any process: at first you have the slow, jagged movements of a beginner, and after a while those movements become practiced and fluid. I hope so, anyway.