Tuesday, January 15, 2019

Discussing Diversity Stats on Reading While White

Today I've got a guest post on the Reading While White blog, where I'm talking about what types of things librarians, publishing industry folks, and anyone interested in diversity in books for youth might consider when diving into the increasing number of data sets we have relevant to the topic of diversity in children's publishing. Take a look!

Reading While White guest post snippet

Sunday, April 15, 2018

There’s No Room for “Priceless” in My Advocacy

I’m rather disappointed by something that was shared with me online over the weekend, and that thing is a particular Libraries Transform campaign Because statement that apparently was voted the best submission to a National Library Week contest: “Because screen time can be pricey, but storytime is priceless.”

At first read—and that’s how the Because statements are meant to function, as quick and impactful points that hit home without requiring much thought—it probably reads just fine. After all, yay, storytimes! Right?

Wrong. Let’s unpack.

This statement is at best an oversimplification and extension of false dichotomies that have been plaguing conversations in youth librarianship about how best to serve children and their families in our current landscape, which unequivocally includes technology. At worst, it’s outright exclusionary of the children and families whom we serve and who depend on devices for their very existence.

See, when we make statements like these, whether we intend to or not, we’re setting up a binary: in this case, with devices and storytimes at two opposite ends of a spectrum that has values of “pricey” and “priceless” respectively. The human brain has evolved to sort and categorize according to broad values, and so these sorts of binaries typically feel like they just make sense. Either/or, good/bad, strong/weak feel natural. The world makes sense.

Guess what? It’s a trap! Our brains have evolved to sort and categorize, but the categories we’ve ascribed as opposing ends of a spectrum are nothing more than social constructs. (We’re having more conversations recently about how binaries are false; if this conversation feels new to you, please take some time to research constructs of binaries related to gender, race, wealth, etc. It is your job as a human who works with other humans to do this work.) The reason this becomes problematic really quickly is that these constructs can typically be boiled down to two opposing positions: one thing has no value, and the other has all the value.

So let’s think about this Because statement recognizing that its syntax is deliberately constructed to have us ascribe zero value to one thing (i.e., devices) and supreme value to the other (i.e., storytimes). (If you don’t think that’s how this particular value dichotomy works, I submit to you the incredibly successful MasterCard campaign of yore.) This construction sets up a relationship in which inherent value is the bottom line descriptor. Storytime is inherently valuable and devices inherently valueless.

Value can be many things, but every interpretation here is reductive and false.

Devices cost money and storytimes are free, and we all know free is better than costly. What a ridiculous value judgment completely devoid of any realities of socioeconomic status or class.

Devices require a threshold for participation (cash, tech know-how, etc.) and storytimes are for everyone. This particular interpretation of value is complete and utter nonsense because storytimes are not for everyone. Do you offer storytimes in English? Your storytimes aren’t for non-English speaking families. Do you have storytimes only during weekday work hours? Your storytimes aren’t for children whose families work. Do you have behavioral standards for children who participate in your storytimes, and ask the “disruptive” ones to leave and try to come back another time? Your storytimes are demanding conformity to particular cultural norms and excluding children whose norms may be different. “Different” does not equal “worse,” but that’s the relationship this binary implies.

Devices inherently have potential personal risk and storytimes are universally beneficial. I call crap. On the one hand, there are many, many uses for devices that are outright ignored by those who like to articulate in every library forum they can find the same tired talking points about “what we don’t really know about technology and kids and so you’re a bad librarian if you use technology.” Once again, do your own research about the ways in which devices—in particular assistive devices, but devices generally—can and do positively impact the lives of children in our communities. On the other hand, to suggest that storytimes are always beneficial is utter nonsense. I read and hear all the time about librarians who share in their storytimes materials that deal in stereotypical representation of people of color, exclude any mention of persons with disabilities, show only one type of family (heteronormative), celebrate the traditions of only one culture (typically Christian), and otherwise diminish the histories and experiences of the very people gathered around our storytime rugs. Children who do not see themselves represented in storytimes—and in particular those who see their experiences excluded, negated, or denigrated—are not reaping the supreme benefits from storytimes that we like to think we’re providing. Rather, we may be causing more harm than early literacy goodness. Intention doesn’t matter.

Devices are bad and storytimes are good. This binary gets me the most annoyed because of how our own internalizations of bad/good get passed on to the kids and families we serve as though they were universal truths. If something is bad, and you use it, the implication is that you are bad. If something is good, and you use it, you are good. This easy use of the transitive property leads to straight-up shaming of parents and caregivers who are legitimately doing their best by their kids, all because their kids have touched an iPhone. Is one of your library goals making parents feel like they’re failures because of what they do with their kids, or because of what their kids do (or don’t do) in the library? Probably not, but ascribing descriptors that translate to “bad” or even “not good” to broad tools like technology does just that. We shouldn’t assume we know how parents and kids are using devices, and ascribing a clear “bad” descriptor to them is both presumptuous and unnecessarily judgmental.

No matter how you’re opting to interpret pricey vs. priceless, this Because statement doesn’t tell the full story and in all likelihood perpetuates myths and behaviors—about both devices and storytimes—that negatively impact the kids we’re meant to serve.

Stepping away from the reductive values interpretations, there’s still the problem of why we’re even comparing storytimes to devices in the first place. They aren’t in a shared category to begin with: storytimes are a service, and devices are a tool. If there’s any relationship here, it’s that devices can be a tool to support the storytime service—not that they belong to the same category.

Let’s leave behind the false-dichotomy readings here and finish with the worst-case scenario of what this Because statement asserts: that devices have no place in storytimes, and so children and families who rely upon devices of any kind have no place there either. I have colleagues who can and do speak to this topic with more eloquence, expertise, and experience than I can, and once again I encourage you to seek out reputable resources should you find yourself needing some professional development in this area. For here, suffice it to say that this Because statement is essentially saying that children and their caregivers who depend upon assistive devices to communicate, participate, cope, and live are not welcome in storytimes. That’s disgustingly ableist.

Look, I get it. It’s catchy phrasing, and I’m hopeful that those who composed and promoted this Because statement didn’t mean to perpetuate unhelpful dichotomies or to exclude families who use assistive devices. But intention doesn’t count for anything in these conversations, no matter how pithy an advocacy statement may sound. Statements like this one only serve to oversimplify important issues deserving of real consideration and research, or to judge and alienate those children and families we’re mission-driven to serve. I don’t have room in my advocacy toolbox for that kind of thing.

Thursday, February 15, 2018

Hands-on Learning & the Future of Learning

Libraries are at a juncture right now--a sort of identity crisis, if you will. Many libraries are reporting downward trends in circulation and other traditional output metrics. Yet many libraries are also seeing upticks in programming, gate count, etc.--indicators of participation in library spaces beyond materials use.

Libraries are using this juncture as an opportunity to assert ourselves as spaces for other valuable community resources, too: as hubs of democracy, as community hubs, and as learning centers, among other supplemental brandings. At ALA Midwinter’s Symposium on the Future of Libraries last weekend, I convened a panel of four librarians who, in our varied roles within varied institutions, have been thinking specifically about that role of libraries as learning centers. We shared what we’ve been thinking about learning in the public library. Today I’m sharing my perspective from my vantage point at my library.

At Skokie, we have age-specific program coordinators in our Learning Experiences department who operate on a level between the management team--who sets the strategic direction and goals--and the front-line programming staff, especially those in the youth and adult departments. It’s the program coordinators’ responsibility to translate the library’s programmatic objectives into practically implementable program strategies and offerings. We do this by taking the goals set by the management team above us, combining them with knowledge we have about programs and attendees from the front-line programming staff, and figuring out the best way to translate our institutional goals into programming priorities.

Based on what we’ve seen in program interactions in the past few years, what we’ve heard from our community, and what we’ve learned through our own explorations into the scholarship of learning, we’ve identified hands-on learning as a major priority for how we’re approaching learning-focused programming. Educational research points to hands-on learning as a successful strategy for facilitating meaningful learning at any age; in many educational communities, you’ll hear the phrase “hands-on learning is minds-on learning.” We’ve taken what we’ve learned about hands-on learning and defined our focus broadly as workshops and opportunities that facilitate skill acquisition, practice, and ultimately creation. Hands-on learning is a core lens through which we’re considering, developing, and implementing educational programming options across all ages. If the library’s strategic goals and objectives are the bright light of a flashlight, hands-on learning is a lens that refracts that light into three key colors--or, in our case, three modes of thinking about hands-on learning.

Our first mode for thinking about hands-on learning is intentionally facilitating skills acquisition and application. We’re thinking a lot about how we can move beyond one-time exposure to a skill and better support patrons in creatively applying the skills they learn. Behavioral scientists call this moving a task from being “explicit declarative”--where you have to actively think about every step of a skill in order to execute it successfully--to being “implicit procedural”--the type of thing you can execute and apply without much conscious attention. A great example of this in our library is our ongoing Be the Chef program series, a hands-on cooking class that incorporates simple cooking skill practice as well as following and executing recipes. Think about something like knife skills--that’s a perfect example of a discrete skill, and one that you really have to concentrate on when you first learn it. With time and practice, however, knife skills become just another skill in your overall toolbox. Participants no longer have to concentrate so hard on using a knife, and instead are able to apply their skill to larger projects. They’re able to do more and do things creatively because of a foundational skill they’ve acquired and refined.

Our second mode for hands-on programming is offering multi-day sustained learning opportunities--or more specifically, boot camp-style programs. These are multi-day programs on consecutive days that allow a smallish group of people--typically 12-16--to learn some basic skills, then put those into practice through guided challenges. For a robotics boot camp, for example, day one is about setting the stage with the basics: what is coding, what is the language and platform we’re using, what are the robots or objects we’re programming, etc. We spend that first day learning the foundations and taking baby steps putting it into practice. Then on day two, and sometimes day three, we really dig into those basic skills and creatively apply them in pursuit of a design challenge. It might be battle bots, or it might be robot races, or it might be creating a choreographed routine for a robot to follow. Whatever the challenge, it’s rooted in building sequentially on the basic skills. Multi-day sustained learning allows us to achieve deeper, more meaningful learning outcomes than traditional one-off programs, too, in which it might be days, weeks, or longer between when a patron comes to a program to learn a skill and when they ultimately have the opportunity to return and put that skill into practice. Multi-day boot camps build sequential learning over time into the fabric of the program.

CC BY-NC-SA 2.0, Skokie Public Library
Our third mode for hands-on learning is providing ongoing, facilitated access to equipment and supplies. This goes beyond traditional programs with finite learning goals and start times into the realm of staffed spaces like makerspaces, STEAM spaces, or DIY spaces. Whenever possible, if we’ve offered a class, a workshop, or an initiative centered around a particular skill--and especially a specific piece of equipment that the library has purchased--we’re going to think about ways to support patrons who dipped their toe into that skill in continuing to practice and apply their learning in a setting that works for them. With a program like Open Sew, for example, patrons who attended a Sewing Basics program--or who have basic sewing skills but no machine, or who haven’t used a sewing machine in a while--can come to our open hours with the sewing and embroidery machines and use the equipment. There’s always a staff person present who can help troubleshoot and give basic guidance, but really these types of facilitated learning times are about allowing patrons to practice, refine their skills, and apply what they’ve learned to make discrete projects or to be creative. Open Sew becomes a hands-on learning opportunity where one patron is sewing a hem on a skirt, another is starting to piece together a quilt, and another is looking to talk to peers about sewing machines and simple projects.

These are the three modes of hands-on learning that we’re applying to our program offerings at my library--the program coordinators’ interpretation of what we can offer when we consider both institutional goals and the learning needs and interests of our community. I’m looking forward to applying the lens of hands-on learning as we think about our next round of programming.

How are you thinking about learning in your library?

Wednesday, November 29, 2017

The Boot Camp Model for Deeper Informal Learning

A few years back, I read about the Fayetteville Free Library's forays into Geek Girls Camp--a weeklong summertime program during which the same group of girls came to the library every day to explore, learn, and build on what they were doing each day. It's an intriguing model: more sustained learning for a core group of program participants than a typical one-off program, with the resulting outcomes of greater increases in skill and confidence among the attendees.

Now I don't know about your library, but at mine, we'd have to majorly shuffle around our program schedule in order to offer even one single all-day, weeklong camp program like this one. Considering our already robust, well-attended program schedule, reducing typical programs to add one really, really big one like this just isn't feasible at our current levels of capacity and community participation. The idea kept rattling around my brain, though. After all, if there's a way to support deeper informal learning for kids in the library, it's something I want to seriously consider. And consider it we did.

That's no rave... that's a Robot Dance Boot Camp!

Starting with winter break of last year, we've made our own foray into the camp-style program for kids: boot camps. For us, the boot camp model means a few things in terms of program formatting:
  • Each boot camp program has a core theme, with all activities taking place during the sessions tying into that theme.
  • Each boot camp has an intended age of attendees that will facilitate age-appropriate peer learning. We tend to offer boot camps for grades K-2 or grades 3-5, not spanning the elementary age range too far in any single program.
  • Each boot camp meets at least two days in a row, potentially three. Depending on the age of attendees and content planned, each session ranges from 60-90 minutes long.
  • Boot camps take place during weeks school is not in session: winter break, spring break, and during the summer.
  • Boot camps benefit from multiple instructors: at a minimum, we try to have one lead instructor across all sessions on a theme, with another staff member assisting each day.
  • The boot camp topic determines the ideal number of participants, with a standard range being 12-20 kids.
A lot of these best practices for our kids' boot camp programs came from years of learning what does and doesn't work in our community when it comes to programs in a series--which is essentially what a boot camp program is. In years past, we'd offered multi-week series programs; for example, a 3D printing program that met every Monday for four weeks. As you can imagine, even though interest was extremely high for these programs, attendance was rough--we found it was difficult for families to commit to attend across multiple weeks because family schedules just aren't that consistent. And so we developed a boot camp model with back-to-back sessions on consecutive days, making it easier for families to schedule their kids to attend all sessions.

We also require that all attendees participate in all of the offered boot camp sessions. That is, if it's a three-day boot camp, the registered child can't be planning to miss even one of the days. That's been something of a shift from our default attitude about program attendance--many families had gotten used to signing up and then deciding to show up on the day-of, rather than clearly committing to attend or canceling should they be unable. That more lenient mode works for us for one-off programs, where we then fill vacant spots with wait list or walk-in participants. Not so for multi-day boot camps, however, where each day builds upon the last. Kids need to be present for all days for the boot camp to be meaningful.

To that end, we employ two core strategies. First is very detailed reminder calls to all registrants. Our library's program assistants call every single family with a child registered for a boot camp, and during that call they remind them a) of the schedule and b) of the expectation that the child will attend all days. Then the program assistant asks, "Will your child be able to attend all sessions?" If the answer is "no," the program assistant once more explains the expectations before removing the child from the registration list. Any time we remove a child from a boot camp because they won't be able to attend all days, we follow that explanation with an invitation to attend a similar upcoming one-off program event. So while a child might not be able to come to a boot camp because they're not available both days, we still leave them with options for other library programs. (We do something similar if a kid shows up on day 2 having missed day 1.)

The second strategy to facilitate all-days participation: the coolest stuff happens on the last day. If it's a boot camp with a creative or art component, that means the core project isn't completed and ready to take home until the end of the final session. If it's a tech or coding program, that means we don't show off the programs we've created until the end of the final day. When all activities in the boot camp build up to a final product or show-and-tell, motivation to participate throughout is strong.

As I mentioned, we've been offering this style of boot camps for three school breaks now, with another set coming up this winter break. In all those camps, we've averaged one child dropping out after day one at each camp--usually a kid who wasn't interested in the topic, but whose parent insisted they try it anyway. The rest of the kids are in it for the long haul, really developing their skills, honing their creations, having conversations with their fellow attendees, and overall engaging in deeper learning than we can typically support in a standalone one-hour program.

Curious what topics we've explored with boot camp programs for elementary age kids? My colleague Amy and I have led, and written about, our Scratch Jr. Code Boot Camp (grades K-2) and our Robot Dance Boot Camp (grades 3-5). We also offered a two-day DNA boot camp this past summer (grades 3-5), and we're about to offer a two-day Baking Chemistry Boot Camp (grades 2-5), where we'll explore the chemistry behind basic baking skills while making three yummy baked goods. Other staff have also led camps on puppetry (sessions for grades K-2 & 3-5); enchanted forest-themed games, crafts, and stories (grades 1-4); basic video editing (grades 4-6); 3D printing (grades 3-5); and simple sewing (grades 3-5). Some colleagues will also be offering a winter nature-themed boot camp in early January, with sessions for grades K-2 and 3-5.

We've found this boot camp model to be really successful at our library: there's always plenty of interest (so that we're thinking about when to repeat camp topics); staff are invigorated to create boot camp activities in areas of their own interest and expertise to share with kids; and kids themselves get elbows-deep in a topic they might be exploring for the first time. When a kid can walk out of the final day of boot camp excited about coding, or proudly holding a handmade puppet or terrarium, or with a link to the video project they made, they're leaving the library not only with a new skill, but with a new interest that can connect them to even more exploration and learning in the future. That's why we offer boot camps for kids.

Monday, September 11, 2017

How do you measure engagement? What we learned during Summer Reading 2017

Starting this year, my role in summer reading is a specific one: I work at developing and implementing our measurement tools and strategies for capturing program outcomes and other evaluative data. To that end, the reading clubs committee's meeting that matters most to my work is the one where they confirm the program goals for the year. Not the theme, not the design of the program--the program goals. Once I've got a grasp of what the goals are, and why they've been chosen, I start thinking about how we can collect data to help us gauge our success.

For Summer Reading 2017, which here at Skokie was called Camp Imagine, our reading clubs committee identified two main goals of our program:

Goal 1: Increase the completion rate by 10%. This goal is a change in outputs--the proportion of registrants who went on to finish the program this summer as compared to the proportion from last summer. You need two data points to get your completion rate: the number of people who signed up for the program, and the number of people who completed the program. This is pretty standard data to collect for reading programs across the country.
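The arithmetic behind Goal 1 is simple enough to sketch in a few lines. Here's a minimal illustration of computing a completion rate and the year-over-year change; all the registration and completion numbers below are hypothetical, invented only to show the calculation:

```python
def completion_rate(registered: int, completed: int) -> float:
    """Proportion of registrants who went on to finish the program."""
    return completed / registered

# Hypothetical counts for illustration only -- not Skokie's actual data
last_year = completion_rate(registered=3200, completed=1280)  # 0.40
this_year = completion_rate(registered=3700, completed=1850)  # 0.50

# Reading "increase the completion rate by 10%" as percentage points:
change = (this_year - last_year) * 100
print(f"Completion rate changed by {change:.1f} percentage points")
```

One design note: "increase by 10%" is ambiguous between percentage points and a relative 10% gain, so it's worth pinning down which reading the committee intends before declaring the goal met.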

Goal 2: Increase engagement. This goal is both a behavior in itself, which is an output, and a change in behavior, or an outcome. In other words, the reading clubs committee was designing a program that would hopefully result in more people participating across the whole summer as well as engaging with both the program and the library at greater rates than they might have otherwise.

(Also, a note here: we didn't attempt to measure engagement last summer, which means we didn't have a baseline to compare this summer's numbers to. As a result, we were actually measuring our success toward the goal of facilitating engagement, as opposed to increasing engagement year-over-year. We can track that next year, using our 2017 numbers as our baseline for comparison.)

Now, you may be thinking to yourself: but how do you measure engagement? That, my friends, is a really, really good question. And there's not necessarily a single best answer. You see, measuring engagement starts by first identifying what engagement means to you with regard to your specific program or service.

So pretty much immediately after I got word that these were the goals selected by the reading clubs committee, I went to them with some questions. Specifically, what did they have in mind when they envisioned patron engagement? That's when I learned that they envisioned Camp Imagine participants sticking with the program throughout the summer; doing the program with some specific participatory behaviors; using the library in new ways; and interacting with one another and the community around books and reading. That may still seem somewhat amorphous and abstract, but this definition gave me plenty to work with--especially since the committee was designing the program so that participants could choose specific activities and challenges that tied into the engagement goal.

That length-of-participation type of engagement--sticking with the program throughout the summer--was measurable by seeing how many days, on average, Camp Imagine participants were active in the program. We tallied the number of days between registration and final visit to the Camp Imagine desk for 20% of our participants (a solid sample, considering we had over 3700 participants). This was a straight output numbers analysis.
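The sampling-and-averaging step above can be sketched like so. This is a hypothetical illustration, not our actual analysis script; the day counts are made up, and in practice each sampled record would yield the number of days between registration and final desk visit:

```python
import random

def average_days_active(days_active: list[int]) -> float:
    """Mean number of days between registration and final program visit."""
    return sum(days_active) / len(days_active)

# Draw a 20% random sample of participant records rather than tallying all 3,700+
all_participants = list(range(3700))  # stand-ins for participant records
sample = random.sample(all_participants, k=len(all_participants) // 5)

# Made-up day counts for five sampled participants, just to show the averaging
example_days = [34, 29, 41, 12, 55]
print(average_days_active(example_days))  # 34.2
```

A simple random sample of that size keeps the tallying workload manageable while still giving a stable estimate of the program-wide average.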

Because of the way we implement our summer reading program and the volume of participants involved, we knew we'd stick to a participant survey to capture most of the rest of the data to get at whether we were actually facilitating engagement. With the goals and definitions in mind, I got down to creating our survey questions to measure engagement.

The reading clubs committee was defining engagement as participating in a few direct ways with the program. That's not the same as just participating in the program. Rather, a specific engagement activity option was available at every level of Camp Imagine: to submit something to the Camp Mailbox. It might be a book review; a picture or drawing of a friend; a favorite recipe; a map of places traveled, or hoped to travel; and more sharing prompts like these. We had a physical Camp Mailbox just next to our Camp Imagine desk, where all registration and badge completion interactions took place. The program was designed to give participants the option of engaging directly with the program and program themes by submitting to the Camp Mailbox, and this type of participation was incentivized when we featured Camp Mailbox submissions on a display next to the camp desk and on our Facebook page. And so, when it came time to think about quantifying this type of engagement, we asked two specific questions on the summer reading survey:

1) While participating in Camp Imagine, I submitted an item to the Camp Mailbox. (Options: Yes / No / Not Sure)

2) While participating in Camp Imagine, I looked at items submitted to the Camp Mailbox on display in the library or on the library's Facebook page. (Options: Yes / No / Not Sure)

The next definition of engagement was using the library in new ways. Once again, this type of engagement was built into the design of the program as an option. At every level of the program, participants were invited to find relevant information in a library database or resource; try reading in a new genre; set up an account with and try a downloadable resource like Hoopla or RBDigital; and more. To get at whether participants were in fact using the library in new ways, we asked one specific question on the summer reading survey:

3) While participating in Camp Imagine, I tried a library resource I'd never used before. (Options: Yes / No / Not Sure)

Finally, the last definition of engagement was that participants would ideally interact with one another and the larger community around books and/or reading. Again, this type of behavior was incentivized, with many levels of the program including the activity of sharing a book or other media recommendation with another person--be it friend, family member, librarian, or anyone else. This type of engagement is also where my initial research into measuring engagement--specifically during summer reading--paid off. I had explored write-ups and descriptions of how California libraries have been tracking summer reading outcomes since 2011, and that's where we landed on this concept of engagement with reading programs as resulting in participants considering their communities as places valuing reading. With the combination of our program-incentivized sharing and the concepts learned from California's outcomes iterations, we asked two specific questions on the summer reading survey:

4) While participating in Camp Imagine, I shared a book recommendation with a family member, friend, or other community member. (Options: Yes / No / Not Sure)

5) As a result of participating in Camp Imagine, I feel that Skokie is a community of readers. (Options: Strongly Agree / Agree / Neither Agree nor Disagree / Disagree / Strongly Disagree)

The survey was rounded out by questions from the PLA Project Outcomes summer reading surveys--we feel it's important that we contribute to the project, which provides its aggregate data for library advocacy across the country, whenever it fits into our own strategies and plans for capturing program outcomes.

Our final survey was one half-sheet of paper, front and back, and we got a response rate that allowed us a pretty high degree of confidence in the data we collected.

Curious what we found?

Our graphic designer Vanessa Rosenbaum created a gorgeous 4-page final report for Camp Imagine. Here's the page about engagement. I put together the data analysis and narrative, and the sidebar includes items from the Camp Mailbox and anecdotes.

From analyzing the length of time that Camp Imagine registrants engaged in the program (which was 73 days long in total), we found that:
  • Youth who earned at least one badge (i.e., didn't just register and never return) participated on average 34 days, or about five weeks.
  • Teens who earned at least one badge participated on average 29 days, or about four weeks.
When we looked at the top 50th and 25th percentiles of youth and teen participants, these numbers were even higher. This type of sustained engagement has implications for combatting the summer slide.

From our survey questions, here's what we learned about program engagement:
  • 84% of participants submitted something to the Camp Mailbox.
  • 61% of participants viewed the Camp Mail.
  • 65% of participants tried a library resource they'd never used before.
  • 75% of participants recommended a book to a family member, friend, or other community member.
  • 94% of participants feel Skokie is a community of readers.
Now, considering participants had the option, at every single level of the program, to complete each level in the traditional summer reading way--by reading alone--we think these numbers are pretty remarkable. In every metric, over half of program participants opted to participate in our engagement activities alongside the traditional reading activities. And the fact that 94% of participants feel our community is one of readers? Well, that makes a librarian's heart happy. And these data all provide solid baselines for continuing to measure engagement over the course of subsequent reading clubs.

So that's how we did it: that's how we measured engagement during Summer Reading 2017.

Saturday, July 8, 2017

Program Kits for Summer Bookmobile Pop-Ups

Two summers ago, I wrote about our forays into pop-up programming for the summer. I mentioned in that post that I'd be assessing this volunteer-staffed, in-the-library, weekly activity model for the following summer, and my youth program team and I definitely considered all aspects of the program. What we ended up with last summer was an in-library program schedule with a much higher volume of activities--which meant we could focus our pop-up energies elsewhere. And so we did: to the bookmobile. What we discovered last summer is that we can pretty simply pop up with activities for bookmobile patrons, but we have to provide staffing for every pop-up; our bookmobile staff see such a high volume of patrons at each stop that they can't lend one of their regular staffers to lead an activity.

So for this summer, we considered what we'd learned, then iterated again. For 2017, we're sending activities out to the bookmobile once again. Since we know providing staffing is vital, and scheduling staff can be difficult during the summer, we needed to be really flexible to ensure that any staff member could confidently and competently lead a pop-up program with the bookmobile, with little advance training necessary. And so we created Bookmobile Program Kits, each with video instructions that we can play at the pop-up.

Ribbon Cutting for New Bookmobile by Skokie Public Library, CC BY-NC-SA 2.0

See, we got a magnificent new bookmobile a little over a year ago. This gorgeous library on wheels boasts an exterior awning, under which we can set up a folding table and lead activities. Also under the awning, beneath a panel on the exterior wall of the bookmobile, there is a flat-screen television. Which means we can play videos while we're set up under the awning leading activities. Thus our idea of how-to videos for bookmobile programming was born.

This summer we've got five different bookmobile program kits available, each with a bin full of supplies and a how-to video featuring a library staff member or community volunteer who has experience leading that same activity in a formal library program or space. All of our kits have a STEAM connection.

We've only made a few pop-up appearances with the bookmobile so far this summer, but word is they're going quite well. The short videos help to reinforce to bookmobile patrons that they can participate in the activity even if they only have five minutes--we've found most patrons plan enough time for browsing and checking out books, but don't plan to stay for a longer activity. Staff leading the activities have shared that the kits work well logistically: all the supplies are easily accessible, the activities don't require a huge amount of space for patrons to engage, and the video allows ease of participation even when there's a larger group of eager participants. And no one forgets how to do an activity, because you have the how-to on a loop.

At this point in the summer, we're considering these Bookmobile Program Kits a success. And a bonus: we didn't specifically brand the how-to videos as pertaining to the bookmobile pop-ups, so we can reuse them in so many potential programs and spaces in the future.

Tuesday, March 21, 2017

Actually, She Did That: The Civic Lab for Women's History Month

The team of folks here at my library who curate the Civic Lab were having a meeting a few weeks ago where we were discussing potential topics for future Civic Lab pop-ups. Sometimes we tie our pop-ups to formal programs on our calendar, sometimes to topics in the news, sometimes to installations in the library, and sometimes to specific days or months of import or conversation. We were brainstorming what topic to focus on for Women's History Month, and we had plenty to choose from--there's a lot going on right now affecting women, have you noticed? You might be surprised, then, to hear that the person who came up in conversation was Kanye.

Or maybe you're not too surprised, because he came up in the context of one particularly annoying and eye-roll-inducing line from Famous: "I made that bitch famous," said in reference to Taylor Swift. As if he, a man, made her, a huge pop star who is a woman, famous because he physically took the stage and microphone away from her while she was winning an award. Gross.

And so we had our topic for the Civic Lab for Women's History Month: women who have accomplished something, but who do not get their deserved credit (often it goes to a man or group of men), or they are better known for something irrelevant to their accomplishments.

We called it "Actually, She Did That"--taking the mansplainer's favorite opening word of "actually" and shedding light on some excellent women throughout history whom many do not know and whose accomplishments have been snatched from them.

The central activity in "Actually, She Did That" was a game of sorts. On a column constructed out of our multipurpose crates, we affixed large images of 11 different women who fit our criteria stated above. (As one of the mother/daughter participant pairs said, these 11 are only the tip of the iceberg when it comes to women not getting the credit due to them.) Each image included the woman's name and date of birth (as well as death, where relevant). On the table next to the column, we had 11 slips of paper. Each slip noted the accomplishment of one of these women, with a parenthetical about how or why she hasn't gotten credit for that accomplishment. The goal was to try to match the woman to her accomplishment, learning more about these 11 fantastic women along the way.

Our 11 featured women were:
  • Nellie Bly (1864-1922) - Bly was a brilliant, pioneering journalist, despite popular opinion that she couldn't be a good journalist because she was a woman. Bly was an early undercover investigative journalist, checking herself into a mental asylum and writing articles exposing the despicable treatment of (mostly female) patients in these facilities.
  • Selma Burke (1900-1995) - A sculptor, Burke was the artist behind the FDR profile that was used on the dime. Yet the (male) engraver typically gets credit for the design, rather than Burke.
  • Laverne Cox (1984- ) - Cox is the first transgender actress to be nominated for an Emmy in an acting category. Yet despite her talent and prowess as an actress, much media coverage of Cox returns to questions about her gender assigned at birth--regardless of its lack of relevance to her career.
  • Rosalind Franklin (1920-1958) - Franklin's research led to her discovery of the double helix structure of DNA. Her male lab partner stole her findings and gave them to Crick and Watson, who went on to win the Nobel Prize for DNA discoveries.
  • Katherine Johnson (1918- ) - One of NASA's "human computers" whose supreme math skills allowed early astronauts to safely start to explore space, Johnson and her colleagues have only recently started to get recognition due to the book and film Hidden Figures.
  • Regina Jonas (1902-1944) - The first female rabbi, Jonas was refused ordination for years despite having gone through the same training as her male colleagues. She was finally ordained before being sent to a concentration camp. She died in Auschwitz.
  • Hedy Lamarr (1914-2000) - Lamarr was a brilliant inventor, developing spread spectrum communication and frequency hopping technology which are now the basis for cell phones and wi-fi. Yet she is often known only for being a beautiful actress.
  • Ada Lovelace (1815-1852) - She wrote the first computer program, although her male friend Charles Babbage is usually credited as the first computer programmer. Lovelace is also usually identified first as the daughter of Lord Byron. So not only does she not get credit for what she did, but she's defined in relation to her male relative.
  • Wilma Mankiller (1945-2010) - Mankiller was the first female chief of the Cherokee Nation. Many American history texts ignore her leadership and maintain there has never been a female head of state in the U.S.
  • Arati Prabhakar (1959- ) - Prabhakar was the head of DARPA, the Defense Advanced Research Projects Agency, from 2012 until January of this year. Research and developments under her watch have included huge strides in biomedical technology like prosthetics. Credit is typically given to the presidential administration at the time of the invention.
  • Chien-Shiung Wu (1912-1997) - Called the "First Lady of Physics," Wu worked on the Manhattan Project. Her work in nuclear physics won a Nobel Prize for her male colleagues, but she was not recognized. Even though the winning experiment was called the "Wu Experiment."

We had some really wonderful conversations with patrons as they engaged in this activity. Many recognized a few names or pictures, but couldn't put their finger on where they'd seen or heard of these women before. We shared biographical facts with participants, many of them shaking their heads in frustration at just how common this type of credit-stealing is. One teen girl, participating with a friend, remarked after hearing the stories of several of the women, "Why do they keep giving away credit?" We talked about how it wasn't a question of these accomplished women giving away credit, but rather of them having credit taken from them or given to someone else. These teens got mad. They demand better, for the world to see them and their friends and other women. As it should be.

Alongside this activity of matching women to their accomplishments, we also had a few other elements available for Civic Lab participants. We had a number of great titles on offer for folks interested in learning about more women and their accomplishments, including:
  • 50 Unbelievable Women and Their Fascinating (And True!) Stories by Saundra Mitchell, illustrated by Cara Petrus
  • Bad Girls Throughout History: 100 Remarkable Women Who Changed the World by Ann Shen
  • The Book of Heroines: Tales of History's Gutsiest Gals by Stephanie Warren Drimmer
  • Dead Feminists: Historic Heroines in Living Color by Chandler O'Leary & Jessica Spring
  • Rad American Women A-Z by Kate Schatz, illustrated by Miriam Klein Stahl
  • Wonder Women: 25 Innovators, Inventors, and Trailblazers Who Changed History by Sam Maggs, illustrated by Sophia Foster-Dimino

We also put together a handout with resources for hearing more women's stories through an email newsletter, podcasts, and online videos. (See the handout here.)

The handout also includes three questions to get folks considering the stories of women in their own lives, as well as how they can make space to hear and share the stories of women:
  1. What have women in your life accomplished? Have they gotten credit for these accomplishments?
  2. What would you say to them in acknowledgement of what they have accomplished?
  3. How can you help to share the stories of women and their work?

We intentionally posed that first question on one of our crates, and we provided sticky notes and pencils for participants to weigh in. During the two hours a coworker and I facilitated "Actually, She Did That," however, no one wrote a response to the question. We don't think it was from lack of interest, but rather from the greater appeal of learning about the women whose images were front and center in the installation. We're hopeful that the public question, as well as the handout, provided fodder for reflecting on the women in participants' lives.

Monday was appearance number one for "Actually, She Did That." We'll be popping up again this Friday, and we're eager to see what types of interactions are prompted this time around. From there, we want to think about how to continue this idea of making clear space for women and women's stories beyond just Women's History Month.