Sunday, October 6, 2024

Google's NotebookLM

In the last several months, Google has noticeably stepped up its artificial intelligence (AI) game to compete with ChatGPT and others.  However, it is Google's latest release, NotebookLM -- still considered to be in its "Experiment," or beta, phase -- that is truly a showstopper.  

Your ability to access NotebookLM may depend on your Google Admin's controls, as well as the typical slow rollout of new Google apps to domains.  As I've done PD and demonstrated the tool over the last few weeks, I'd say half of the school districts in the room could use it, and half could not.  If you currently cannot access it, check back weekly and/or put in a ticket with your head IT admin.  (Like many optional Google products, your Google admin can choose whether only teachers should access it or if/when students can.)

How does it work?

Using NotebookLM requires being logged into your Google account.  After going to the platform's website, you are asked to create a new Notebook or play with several pre-made examples.  (After you create your first Notebook, you will go straight to a page where you can edit/view your Notebooks, create a new one, or explore the examples.)  From the beginning, it's important to point out that Notebooks created in educational domains may not be able to be shared or viewed outside your own district -- at least for now.  (Interestingly, that's not an issue for Notebooks made by a personal Google account.)


For a new Notebook, the first step is sharing resources.  Those fall into four buckets: uploading a file from your hard drive (for example, a PDF, TXT, or MP3), sharing a Google file from your Drive (currently, only Google Docs and Slides), providing the URL of a website or YouTube video, or pasting some copied text.  There is a limit of fifty (!) sources you can share for a single Notebook.


After a few seconds of analyzing your sources, you land on the "home" page of your Notebook, called the "Notebook Guide" (see the blue asterisk/text in the lower right, where you can switch back and forth between this and your "Notes" page).  At this point, you can edit the name of the Notebook at the top.  At the bottom, you see a field to put in your own questions or tasks about your sources.  It's important to note this is not a search engine; you're using the power of Gemini AI to delve into just your chosen sources!  You also see four other choices for exploration:

  • Help Me Create:  several buttons to autogenerate certain structures related to your sources, such as a FAQ, Table of Contents, and a Study Guide.  Note that some of these, such as the Table of Contents, actually generate a ToC for each source, not one for everything you've attached.  Generated content goes on your "Notes" page.
  • Summary: An overview of your sources.  
  • Suggested Questions:  NotebookLM suggests three possible questions about your sources, which Gemini AI will answer if clicked.  These can become part of your "Chat History" alongside any other custom questions or tasks you pose; this history is viewable by clicking the lower left text ("View Chat").
  • Audio Overview:  this feature is too mind-blowing to just breeze past, so more on it in a moment.


On the Notes page, you see any saved responses as discussed above, or you can "Add Note."  At first, this may seem like a typical text note, but once completed, you can checkmark one or more of these notes and blue buttons will appear below to "Help Me Understand," "Critique," "Suggest Related Ideas," or "Create Outline."  These are, of course, all AI-powered.  Note that the Gemini-powered input bar for typing questions or tasks remains below as before.

Are you ever annoyed when AI provides information but you don't know where it came from?  That was an early complaint I had about ChatGPT (although to be fair, some later genAI tools like Microsoft Copilot have such transparency built in). NotebookLM tackles this by not only adding footnotes to its responses to show their sources, but with a click, you can go to that part of the source document!

Hover over the number, and its location in the source is previewed...


...and if you click the number, the source appears on the left, going straight to where it occurs (in this case, a particular Slide).

We now have to finally address Audio Overview.  While all of the previously described aspects of NotebookLM are pretty impressive, it's Audio Overview that gets the most oxygen when people discuss the tool.  In the upper right of your "Notebook Guide" page, hit "Generate" to create a "deep dive conversation."  This can take a few minutes to render.  Once ready, hit play, adjust the playback speed if you like...and I dare you not to close the lid of your laptop like I did, just to process what you've heard.  Google pointedly does not call it a "podcast," but it's hard not to think of one when you hear two hosts who sound very much like a man and a woman talking about the sources you've attached -- complete with informal interjections, thoughtful pauses, stutters, human-like rise and fall of intonation, and even laughter.  (It's also not a perfect simulation -- over the course of 8 to 10 minutes, there are several strange pronunciations, moments of flat affect, glitchy audio, and/or sound dropouts.)  I've marveled at quite a bit of AI over the last few years, but this feature had me floored.

Another interesting thing is that Google has apparently trained its AI on a significant number of podcasts to recognize the conventions of the genre. For example, I created a Notebook around one PDF of how a teacher uses TTRPGs in his own classroom.  I never explained what a TTRPG was in the PDF, as this was assumed background knowledge (schema) in its original context.  And yet, the Audio Overview took pains to define what a TTRPG is at the start, just as you would likely expect in a typical podcast for a general audience.  At the end, it even suggested other ways you could use TTRPGs in other classrooms, although again, that was not anywhere in the original PDF.  Last but not least, when the Audio Overview uses the second person, the audience is clearly "you" -- that is, the Google account holder who made the Notebook.  It's not a human podcast, but it may be the world's first personalized, machine-generated learning podcast.

Remember that you cannot currently share Notebooks outside of your educational domain.  However, click the three dots for the Audio Overview and you have the option to download it as a WAV file, which could then be shared however you like.  (Again, rules are different for personal Google accounts; in fact, NotebookLM allows you to create a URL you can share so that anyone can hear your Audio Overview streaming within a browser.)  Here's an example of an Audio Overview where my Notebook's sources included a Google Slides presentation I did at EdSpaces in November 2021 along with an Edutopia article.

Friends, we are in new AI territory.  And remember, NotebookLM is currently beta!

Google provides some informational disclosures on the Audio Overview feature.  The second bullet is particularly interesting, as I've already encountered "extra" information in my Audio Overviews.


Some findings and tips that have emerged as I've done some early (albeit limited) tinkering:

  • When you first start adding resources to a new Notebook, the process is different depending on the kind of source.  For example, if you select a hard drive file to upload, it will immediately begin building a Notebook after just one is selected; if it's a Google file, you could conceivably choose multiple Docs and Slides before Notebook creation starts.
  • Once the Notebook is created, you can add or remove resources with the slideout panel on the left side. If you add or subtract any resources, however, the only way I've found to "reset" the Notebook with the updated info is to refresh and reload the entire tab.
  • If you click on a single source on the left, Notebook will summarize just that one source; online articles are shown as a simplified version of the text.
  • When connecting multiple sources, the amount of text/info in a long source may tend to dominate the others.  For example, I uploaded a 180-page PDF alongside my deck of 40ish slides, and my slide deck wasn't even acknowledged in the Summary.  When I uploaded the same slide deck with the Edutopia online article, both sources were singled out in the Summary and the outputs were more balanced.
  • When you leave and return to a Notebook, previously generated Audio Overviews are not immediately available to play, but have to be "retrieved" with a button push.  This usually resolves much faster than the original generation took, however.
  • Could you break Gemini's attempt to find patterns and summarize if the sources (at least to human eyes) were random and in no way related?  Maybe.  It might be worth the experiment to see how Notebook perseveres to make sense of such chaotic data.

How could you use it?

From the student side, NotebookLM gives students the opportunity to create a body of content that they can interrogate with their own questions.  It clearly gives struggling students some scaffolding tools to wrestle with new material.  Imagine how much easier research can be if you use AI to scan a long document for particular information!  A participant at a recent PD of mine pointed out that a teacher could create and share a Notebook as a kind of last-minute "sub plans" activity -- have students interact with the Notebook, including listening to the Audio Overview.  Since we know AI is not 100% accurate (although the footnoted sources are a welcome touch of assurance!), I did a "yes and" to that idea and recommended a reflective analysis where students look for flaws either in the tool's execution or in the facts presented (by reading/listening to/viewing the original sources) and, most importantly, answer this question: how did NotebookLM help them better understand the content?

Downsides?

Currently being able to share your Notebook only within your own educational domain is a frustrating limitation, although considering that not all domains even have access to NotebookLM yet, it makes sense.  I also wish that if you clicked on a source in the left side panel, the source itself would open in a new tab (at least if it's a URL or a Google file).  Lastly, I think a large "reset" button after sources are added/removed would be more intuitive for a user.  Beyond these technical critiques (which, to be fair, may be addressed by the time NotebookLM reaches full release), AI makes us once again approach another instructional crossroads.  GenAI has already proven itself able to write essays, poems, and songs; what does it mean for students to learn how to synthesize and find themes when they can upload their three social studies primary documents to NotebookLM and produce a summary and a "podcast" within seconds?  The answer may be that NotebookLM can and will be used to produce first drafts that students analyze for errors and extend upon, to serve as a collaborative research partner, and to generate models that students can learn from, imitate, and improve.


I'll definitely be tracking this tool as it goes from "experimental" to full release in the months ahead!

 

This tool was first brought to my attention by Rebecca Simons of Murray State University's Kentucky Academy of Technology Education (KATE).  KATE has been around since 1996, and is a wonderful resource for emerging edtech tools, as well as opportunities for PD.  Consider joining the eduKATE community!

Sunday, September 29, 2024

KyEdRPG Spotlight: Morgan Seely, Bringing Fourth Grade "Learning to Life Through Imagination"

Morgan Seely is a Shelby County Public Schools (KY) teacher at Painted Stone Elementary who started her educational career in 2011.  Seely believed strongly in evidence-based practices from the start, but the launch of Shelby County's Profile of a Graduate in 2017 helped spur her journey toward centering student agency and competency-based education in her own classroom.

Playing TTRPGs is a part of Seely's household family fun.  She began considering how to incorporate them into her classroom in the spring of 2024 with her fourth graders.  One student was particularly passionate about the Titanic, which led to a classwide mini-adventure where students experienced the difference between the survival rates of first-, second-, and third-class passengers.  Later in the spring, with the help of her husband, Seely used a simplified D&D gaming mechanic where student characters had to apply their math knowledge to solve quests.  It led to Seely vowing to make more extensive use of TTRPGs in her upcoming 2024-2025 school year.  Of course, I was delighted to hear all of this!

When we talked in the summer of 2024, Seely shared that she wanted to kick off with a character creation activity, with the plan to play several academic-based adventures throughout the year reusing the same characters.  Once again, her husband helped devise a simplified D&D-like system that seemed appropriate for her upcoming fourth graders.  A student chooses a "species," which automatically creates two statistics of Armor Class (AC, or how hard it is to get hurt) and Health Points (HP, or how much hurt you can take before you have to sit out for the session).  A choice of "ability" -- what are called "classes" in Dungeons & Dragons -- adds a bonus to their AC or HP.  Students choose two items of equipment from a given list or can present their reasoning for something else.  Lastly, the students have to write a backstory biography about their character and explain the significance of their two pieces of equipment.  (Copies of Seely's Slides for directions and her character sheet template are available in a Google folder here.)  
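For readers who like to see the mechanics spelled out, here is a minimal code sketch of a character system like the one just described.  The species names, stat values, and ability bonuses are invented for illustration only; Seely's actual choices live in her shared Slides.

```python
# A minimal sketch of a simplified, D&D-like character system:
# species sets AC and HP, an ability adds a bonus to one of them.
# All names and numbers below are invented for illustration.
from dataclasses import dataclass, field

SPECIES = {            # species -> (Armor Class, Health Points)
    "human": (12, 10),
    "elf":   (13, 8),
    "dwarf": (11, 12),
}

ABILITIES = {          # ability -> (which stat gets a bonus, bonus amount)
    "guardian": ("ac", 2),
    "healer":   ("hp", 3),
}

@dataclass
class Character:
    name: str
    species: str
    ability: str
    equipment: list = field(default_factory=list)   # two items, with reasoning
    backstory: str = ""

    @property
    def ac(self) -> int:
        base = SPECIES[self.species][0]
        stat, bonus = ABILITIES[self.ability]
        return base + (bonus if stat == "ac" else 0)

    @property
    def hp(self) -> int:
        base = SPECIES[self.species][1]
        stat, bonus = ABILITIES[self.ability]
        return base + (bonus if stat == "hp" else 0)

jack = Character("Jack Potter", "human", "healer",
                 equipment=["spellbook", "lantern"],
                 backstory="A young wizard searching for his lost familiar.")
print(jack.ac, jack.hp)   # 12 13
```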

I came early in the school year when Seely introduced me as a guest teacher who knew about a special kind of game where you need to depend on your imagination to co-create stories.  Would the students be interested in playing this game?  (Posing this kind of question is one of the many pieces of evidence that Seely believes in a community of learners with a voice in how and what they learn.) The students nearly unanimously said yes.  Cut to about a month later in mid-September, and I came back as an observer and helper for their character creation kickoff activity, with a title Slide announcing that today we would be "Bringing Learning to Life Through Our Imagination."

I was impressed and caught the fever of their enthusiasm from the start.  Imaginations were visibly on display, as students became deeply invested in how their biography, physical description, and explanation of their equipment intertwined.  (Just as with adults, coming up with names was the hardest part; I advised a student, "Maybe you could go with Jack Potter as a good wizard name?")  When students were ready, they queued up in line to have Seely help generate a portrait of their character in an artificial intelligence image tool.  This became a great on-the-spot lesson about prompt engineering, as students quickly realized the more details you provided, the better result you got.  A clever tech management strategy that Seely used was digitally distributing the character templates to each student as a Slide linked back to a master slide deck for the teacher; this meant Seely could view all the student characters in one place, as well as easily drop her AI image onto a particular student's character sheet.

One student (who happened to be Seely's son) led a handful of his peers who had finished early on how to map a dungeon.  It foreshadowed Seely's goal of having student "game masters" lead small groups through adventures in the future.  It should be pointed out that Seely was very mindful of the age and maturity of her students -- she reminded them that "kindness is the most powerful ability you can have" and that whatever obstacles may lie ahead in their adventures, we were "defeating" adversaries, not "killing" them.  As Seely wrapped up the activity and brought them back together as a whole group, she had the students reflect on how the day's work aligned to an academic reading standard.

Even if this character creation activity was the end of the road, it still would have been a worthy learning experience.  But Seely promises to have her fourth graders role-play with these characters in academic quests to come.  I can't wait to visit again and see them in action!


Thursday, September 12, 2024

A New Role at OVEC

Back in late August 2022 -- in what seems like just an eyeblink ago -- I joined the Ohio Valley Educational Cooperative (OVEC) as part of the grant-funded "Deeper Learning Team."  It's been a journey full of opportunities for learning, partnerships on creative projects, and celebrations of the innovative teaching in the Bluegrass.  Alas, the grant ends on September 30, 2024, and with it, my Deeper Learning Design Specialist position.

But I am incredibly fortunate that our CEO Jason Adkins, along with OVEC's Board, saw the value of the work many of my grant-funded colleagues achieved, and found it in their budget to create several new positions.  So I'm very pleased to share that on the first of October, I will become OVEC's Digital Learning Consultant!

I strongly believe in OVEC's mission of "empowering educators so students thrive," and have grown to know many amazing colleagues.  To be able to work here is a joy and a privilege.  Besides continuing to be a thought partner and facilitating professional development, I am also thankful to still be sharing the stories of innovative educators in our area, through this blog and my social media.  

I'll end the entry with a link to my newly launched OVEC Digital Learning Consultant website.  (It's also reachable via its shortcut ovec.org/digitallearning.)  If you're looking to schedule a conversation on how I can support you, or register for the latest PDs I'm leading, please give it a visit!   



Wednesday, August 14, 2024

Play Make Learn and Gen Con 2024

In the last month, I've been fortunate to attend two conferences, and from both I've gotten good ideas for digital and game-based learning.  In today's blog entry, I'll share my reflections with you!

Play Make Learn 2024

Play Make Learn is an annual conference, first held in 2017, at the beautiful University of Wisconsin campus in Madison, Wisconsin.  True to its name, it's a drawing together of teachers, librarians, crafters, and digital and analog game publishers serving students from kindergarten through higher ed.  You'll see sessions and displays on everything from makerspaces to video games to looming(!) to tabletop roleplaying games.  I went with my colleagues Jen and Amy; it was the first time any of us had gone, and we constantly remarked on the palpable joy in the air from presenters and attendees throughout the two-day conference.

Some of the conference highlights included:

  • The Madison Public Library system shared findings from the beta testing of The Observation Deck, an intriguing platform that captures multimedia evidence of how library experiences impact patrons.  Separate from the platform itself, I appreciated the "Starter Frameworks" that were shared; these could be inspiration for analog observation and classroom walk-through tools for instructional coaches and admin. 
  • Playful Learning Landscapes discussed their community-focused approach to creating urban spaces that center around the "PLL Model" of the 5 Principles of Learning (Actively Engaging, Joyful, Iterative, Socially Interactive, Meaningful), The 6 C's Learning Goals (Collaboration, Communication, Content, Critical Thinking, Creative Innovation, Confidence), and Community Engagement.  I particularly liked their downloadable "Playbook" (available in English and Spanish) with steps on how to implement a Playful Learning Landscape of your own as well as dozens of examples from America and around the world.
  • In a "Library Learning Innovations" three person panel presentation, lots of valuable ground was covered.  Sam Abramovich talked through a brief history of genAI, along with the challenges and power it can bring to libraries.  (He also introduced me to the phrase HOMAGO for the first time.)  Rebecca Teasdale shared a preliminary framework for evaluating makerspaces.  Lastly, Chris Baker (Wisconsin Department of Public Instruction's Public Library Consultant, and the lead organizer of the conference) shared evidence on the power of gaming (video, board, tabletop) in a library; among other resources, he highlighted WISELearn.
  • The Wisconsin Historical Society recently published Wisconsin Adventures, in honor of the 50th Anniversary of Dungeons & Dragons (which was launched from Lake Geneva, about 80 miles southeast of Madison). Wisconsin Adventures uses the same ruleset as D&D (i.e. it is "5e compatible"), and it leads players through quests inspired by urban legends, famous locations, and historical personages of the state.  Even more impressive is how they aligned it to academic Core Content standards.  At the end of the WHS session, its historians led us in small groups through a mini-adventure from their module.  And that was just the first time I played D&D at PML....
  • Jen, Amy and I played a one-shot Dungeons & Dragons adventure designed to help introduce the game to new players.  Jen and Amy may have never rolled the polyhedral dice before, but quickly caught on, and Amy even gave the killing blow to the end boss "Death Tyrant"!




Gen Con 2024

While this was the second time I've attended Gen Con, it was the first time I had presented, alongside Shelby County Public Schools teacher Justin Gadd.  The session was part of Trade Day, which occurs on the Wednesday before "The Best Four Days in Gaming" of the convention itself.  The title of our presentation was "TTRPGs in Education: Cultivating Creativity and Critical Thinking in the Classroom."  For his portion, Justin did wonderfully as he shared his classroom story of how tabletop role-playing games have positively impacted his students. 

Justin Gadd from Gen Con 2024.
Justin Gadd, middle school teacher extraordinaire.

After Trade Day, I took off work for Thursday and Friday to enjoy some personal time wandering the halls, playing games, and seeing old friends like Dan Reem and Tom Gross from the podcast Teachers in the Dungeon!


I made several connections that could help a classroom:

  • In his session "No Apologies: Arguments and Examples of Analog Gaming as Effective Tools in Secondary Education," Shawn Thorgersen (a grade 7-12 English teacher and Assistant Principal in New York, as well as a PhD student at St. Johns University) discussed evidence-based research on the topic and some examples from his own classroom.  Thorgersen's slides had a quote from S.S. Boocock that particularly lifted my heart:

  • Dr. Katie King, from the University of Massachusetts Lowell, led a session on "Using a TTRPG as a Professional Development Tool." She shared a project she is piloting, where grad students in a teacher prep program have their teacher "characters" engage in TTRPG scenarios as led by a "Classroom Master."  As Dr. King pointed out, such role-playing is closest to the active form of student teaching and therefore more effective and authentic than the more passive learning of classroom observation or article reading.  It definitely got my wheels turning on how a TTRPG could be the engagement model of a professional development session or series.  
  • Limitless Adventures has a new solo adventure gamebook based on D&D 5e rules called Lost in the Dark. The book leads you through character creation, and from there, advances you through the story complete with dice rolls and monster encounters.  Great for a classroom or school library for students to check out, or for enrichment time. 
  • The Story Engine is a company that makes various card decks & expansion sets that can help build a narrative or world, with simple and intuitive rules.  While originally made for TTRPG players or GMs, the decks can be excellent for brainstorming fiction, understanding geography, or collaboratively practicing what it's like to co-create a story.  Additionally, they have free lesson plan resources!  (While most of the lessons are aligned to middle and high school standards, others -- like the "9 Storytelling Activities" handout -- could be for any age group, depending on their literacy level and the supports given.)
  • I first met Tim Beach and his Start Here Roleplaying Game back at the GAMA Expo in March.  He had a presence at Gen Con too, where (as a Trade Day educator) I was gifted an early release of its box set by Tim.  The purpose of the game is to get people playing as quickly as possible, and from what I've seen of the game mechanics, it succeeds!  The box set comes with both simple and expanded rules, along with several genre settings, from fantasy to "Wizards & Wranglers" (think Weird West) to "Zombiesaurus Rex" (think Jurassic Park mixed with The Walking Dead).  The game should be available for ordering very soon.  Until then, check out the Beach House RPG Facebook page for updates.
  • I've been a fan of 9th Level Games for a few years now, and I was excited to examine a new TTRPG they are about to publish aimed at elementary-aged children: Venture Society.  As their Kickstarter puts it, Venture Society is an "all ages, non-violent RPG focusing on building communication, social, & emotional skills" which utilizes 9th Level's "Polymorph" simple game system.  It's available for pre-order and coming out this fall.
  • I will end with three words: Dice Petting Zoo.  Well...perhaps that requires a bit more explanation.  Tom Gross and I discovered a handcrafted wooden tray with dice at a booth advertising some D&D "stay and playcation" opportunities (Nat21 Adventures).  The idea was so whimsical that I immediately began brainstorming how a Dice Petting Zoo could be a delightful addition to a classroom.  It could provide an opportunity for calming and building culture. Imagine naming the various polyhedral "animals."  Or having students whisper numbers to them as they try to solve a difficult math problem.  Or for those that need to fiddle, taking them for a "walk" (roll!).  
A wooden tray with "Dice Petting Zoo" painted on it.


Of course, there were so many great experiences, tools, and products at Play Make Learn and Gen Con that this entry can only be considered a fly-by.  For educators looking for inspiration on how game-based learning can work in a classroom, or who need convincing about the power of play, I highly recommend checking out either or both conferences! 

Thursday, August 1, 2024

Ten Years of Edtech Elixirs

Welcome, and thank you for checking out my blog!  I have been involved in education since 2005, but never seriously tried to blog before.  My "About Me" profile sums up my educational background succinctly, but I will add that becoming the first District Technology Integration Coach of Shelby County is definitely the motivation to begin blogging now. Therefore, I am committed to making meaningful and ongoing posts, but most importantly, making my blog a useful resource for others.

That's how I started my first Edtech Elixirs blog entry exactly ten years ago, on August 1, 2014.  It was a time of transition.  I had thought I would be a classroom teacher forever, but an opportunity beckoned in another county and I began a district position that summer.  I had the humblest of ambitions for the blog.  It would be a place for me to collect resources for Shelby County Public Schools staff, perhaps opine here and there on education, and frankly, save time: when I thought a tool or strategy might help someone, I could simply drop a link to the appropriate blog entry in an email and click send.

But to my endless astonishment and appreciation, the audience for Edtech Elixirs kept growing and growing outside of Shelby, continuing into my newest job with OVEC that started in 2022.  Along the way, I have blogged about some educational highs (like getting mentioned in Star Wars Insider!) and some lows (like when thousands of Kentucky educators got sent home in the middle of a conference to begin sheltering in place).  It also has led to other writing opportunities.  I've had my entries republished (thank you Aurora Institute's Laurie Gagnon and Eliot Levine!), but Edtech Elixirs has also opened doors to writing original articles (again, thanks to Laurie and Eliot, as well as Kristen Vogt from Next Generation Learning Challenges, and others).

Some milestones from 2024 to share:


Lest anyone feel this entry is merely for vanity and bravado, I want to make my two purposes clear. I won't deny I'm proud (and surprised!) to still be writing Edtech Elixirs after a decade, and want to celebrate that fact, but capturing some of the numbers above is just a chance at statistical reflection; years ago, I was shocked when I hit a thousand views, and never thought writing even a hundred entries was possible.  It's been an amazing, humbling journey!  That leads to the second (and main) purpose of this entry, which is to give thanks. When I hit "Publish" for the first time ten years ago, I never could have dreamed this journey would still be continuing in 2024 -- and it is a journey only possible because of YOU.  To every reader out there who ever took a few minutes to read my words, or shared/reposted a blog entry: I cannot express enough gratitude.   

There are some announcements I'll be excited to share very soon on what's next on my horizon.  For now, I'll simply repeat my thanks for all the edtech tools I've been able to play with, for all the learning partners I've made, for all the fascinating books I've read, for all the people I've had the good fortune to meet and be able to celebrate their stories, and once again, for all the readers.  Here's to the next decade, in which I hope to remain "meaningful," "ongoing," and above all, "useful."  Stay tuned!



Saturday, July 13, 2024

Snorkl

As a former high school English teacher, I always felt pressured to provide useful, personalized feedback quickly.  Digital tools like the Comment feature in Google Docs became popular just as I was leaving the classroom, and certainly shifted the paradigm when it came to giving "just in time" quality feedback.

Now, thanks to artificial intelligence, we are on the precipice of the next generation of digital feedback that can transform teaching.  For some time now, genAI chatbots like ChatGPT have offered an augmented opportunity to comment on student writing -- if you are willing to engineer a prompt for what you are looking for, then copy and paste each student essay into the tool, then copy and paste any AI feedback you wish to share back with the student.  (You can even include your rubric in your prompt, so you can get your feedback with a suggested assessment score.)  This is free and helpful, albeit time-consuming and clunky.
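For the curious, here is a hypothetical sketch of what scripting that copy-and-paste workflow might look like using the OpenAI Python SDK.  The rubric text, folder layout, and model choice are placeholders rather than a recommendation of any particular setup, and the usual caveat applies: a teacher should still review every piece of AI-generated feedback before sharing it.

```python
# A hypothetical sketch of batch essay feedback with the OpenAI Python SDK.
# The rubric, folder name, and model are placeholders for illustration only.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

RUBRIC = """4 = clear thesis, strong evidence, few errors
3 = clear thesis, some evidence, minor errors
2 = vague thesis, little evidence, frequent errors
1 = no thesis, no evidence"""

PROMPT = ("You are a high school English teacher. Using the rubric below, "
          "give three sentences of specific, encouraging feedback on this "
          "essay and suggest a score.\n\nRubric:\n" + RUBRIC)

# Assume one plain-text file per student essay in an "essays" folder.
for essay_file in Path("essays").glob("*.txt"):
    essay = essay_file.read_text()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": essay},
        ],
    )
    print(essay_file.name)
    print(response.choices[0].message.content, "\n")
```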

But now we enter what I am calling the "refurbished" phase of AI.  In these newest AI tools, the programming is more hidden under the hood, while the interface is much more user friendly and less dependent on prompt engineering.  And that leads us to the tool for today's entry: Snorkl.  While its strength in providing feedback for solving math problems seems obvious, its potential for analyzing reflection and metacognition in multiple content areas is also apparent.

How does it work?

You can sign up for free with a Google or Microsoft account.  Snorkl will ask you whether you are a teacher or a student -- I'm not sure how easy it is to change this if you answer incorrectly, or if you want to see what the other side is like before returning to your standard role, so choose wisely.

Your home page is intuitively organized with several options.  A "Getting Started" box provides light tutorials for how Snorkl works. Some videos to watch are in "Resources."  You can explore premium plans; although a free account gives you unlimited classes and co-teachers, you have "limited" activities -- an actual number is not specified.  Last but not least, you can create a class where you can then make activities for students (a share link for your activities makes it easy to get students jumping in).

Your Home tab.  Note the tabs across the top for Classes and Library (see below).

What does Snorkl feel like for a student? In the pre-made "Try as a student!" section, several prompts are given so you can experience Snorkl for yourself, and reveal its potential for a wide range of content.  For example, there's one for "Trig Ratios," and another that asks you to "Identify the author's feelings about New Orleans in this 4th grade ELA assignment."  I was intrigued by the "Peanut Butter & Jelly Sandwich" activity -- a classic writing prompt I used years ago with students to show the importance of details when describing "how to" instructions -- so that's the one I chose.

When I opened it up, I first encountered the "Response Whiteboard."  

The activity's instructions are in the upper right, which can also be read aloud to the student.

As you can see from the toolbar at the top of the whiteboard, several buttons allow you to insert text, pictures, and formulas into your response; while you can use the mouse/trackpad to draw, it's probably not as natural as using your finger or stylus on a touchscreen device. (This is the same whiteboard interface the teacher uses when creating the activity prompt for students.)  However, "Record Screen + Voice" is where the real magic of Snorkl resides.  While you can write or draw to your heart's delight before hitting this button, you really can't submit your response unless you make a recording.  That's because Snorkl takes what you say and converts it to text for the next step of the process. Of course, like any screencast, you can draw and type as you talk.  (The first time you do this, you'll be prompted to give Snorkl permission to use your microphone.)

Once you submit your response, Snorkl uses AI to, in effect, analyze your thinking out loud and tell you how well you did.  The first time I responded I didn't doodle and simply talked through how to make a PB&J.  After several seconds, I got back this screen:

The comments on the right are timestamped; click on the time and you can go straight to that part of the recording.

I have to admit, this had me pretty slack-jawed.  You can play back your recording, with a running transcript in the form of captions.  Snorkl provided some simple overall feedback ("Correct," "3/4 Strong") followed by impressively detailed commentary.  Even the tone seemed appropriate -- Snorkl celebrated my strengths ("Great job on giving a detailed explanation...I love how you even included the part about opening the bread bag") while also providing me ways to improve ("Have you tried using the same knife for both the peanut butter and jelly?").
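Snorkl doesn't publish how it works under the hood, but the flow it describes -- record a spoken explanation, convert it to text, then have an AI critique the transcript -- can be sketched conceptually with off-the-shelf pieces.  To be clear, the sketch below is not Snorkl's actual implementation; the file name, models, and prompt are assumptions for illustration only.

```python
# A conceptual sketch, NOT Snorkl's implementation: transcribe a student's
# recorded explanation, then ask an LLM to critique the transcript.
# File name, models, and prompt are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

# 1. Speech to text: convert the recorded explanation into a transcript.
with open("pbj_explanation.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio
    ).text

# 2. Feedback: celebrate strengths, then suggest one improvement,
#    much like the tone of the Snorkl feedback described above.
feedback = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content":
            "A student explained how to make a peanut butter and jelly "
            "sandwich. Name what they did well, then suggest one way to "
            "make the instructions more precise."},
        {"role": "user", "content": transcript},
    ],
)
print(feedback.choices[0].message.content)
```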

If you like, you can do another response, either with a clean whiteboard or picking up where you left off.  These additional responses become a history that you can revisit and review, to see your growth over time.

The Portfolio tab shows how Snorkl can be a place for artifacts of learning; for example, imagine a student pulling out a response during a student-led conference with their parents. 



Of course, teachers can review and play the videos of these submitted responses.  Once several responses have been submitted, Snorkl's website suggests that a teacher will be able to see some class-level insights, such as "top exemplars" and "common misconceptions," but it is also possible such insights are only fully available as a premium feature. 


Snorkl calls itself "a versatile tool for all subjects, used from elementary through college."  Beyond the "Try as a student!" examples mentioned above, the platform also provides some pre-made activities in several content areas and grade levels in its Library.


The library bank already covers many grade levels, with more "coming soon."


Finally, for a video on getting started with Snorkl, watch the following (3:24):





How could you use it?

Students could be assigned a Snorkl task:

  • as a more interactive type of "flipped learning" homework
  • during class as part of a blended learning station rotation
  • at the end of class as a formative assessment exit slip to determine if they understood today's content (and give the teacher data on what misconceptions to address for tomorrow's class)

Downsides?

Snorkl offers some powerful features for free, so it is hard to find faults with the tool itself beyond how many activities you can create at no cost, or the possible limitation of the drawing features if you don't have a touchscreen device.  From a teacher's perspective, it truly can become a powerful "instructional aide" (not a teacher replacer!) in your classroom.  A more likely negative is the potential for a teacher to overuse the tool, or to use it without also checking student work in person.  As always with edtech -- especially with AI -- practice moderation, oversight (it's not always right), and balance.

Have you used Snorkl, or a similar AI tool?  Share your stories in the Comments below!








Friday, May 31, 2024

KyEdRPG Spotlight: Lexie Bewley-Gilley Bringing Role-Playing to Reaganomics

Lexie Bewley-Gilley is a high school social studies teacher at Bullitt Central High School (Bullitt County Public Schools in Kentucky).  After attending some sessions on game-based learning at KySTE 2024 -- and in particular, how tabletop role-playing could be a part of a classroom -- she was excited to attempt a new lesson near the end of her own school year.  KyEdRPG friends like Chad Collins and Michelle Gross definitely were an inspiration!

Recognizing that student energy flags in May, and that her U.S. History unit about the end of the Cold War and Reaganomics had been a bit dry in the past, Lexie found an angle to gamify the learning.  She took the element of "rolling up a character" in a TTRPG and had students create a person living in the 1980s.  Lexie leaned on Canva for its presentation and video creation capabilities, alongside AI tools for image generation.  Kicking off with an ElevenLabs-narrated video in full Valley Girl speak, Lexie made a slide deck to guide the students through a series of d20 rolls on random tables, starting with determining their socioeconomic status.  Each random table roll brought a new financial crisis or opportunity that, in effect, shaped the life of their character.  Students collected the narrative along the way on their character tracker sheet.

From a Bullitt County PD session led by Lexie, sharing her lesson.
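To show how simple the underlying mechanic is, here is a hypothetical sketch of the d20 random-table idea described above.  The table entries and events are invented for illustration; Lexie's actual tables live in her slide deck.

```python
# A hypothetical sketch of a d20 random-table mechanic for a 1980s character.
# All table entries below are invented for illustration only.
import random

SOCIOECONOMIC_TABLE = {        # d20 roll -> starting status
    range(1, 8):   "working class",
    range(8, 15):  "middle class",
    range(15, 21): "wealthy",
}

EVENT_TABLE = {                # d20 roll -> a 1980s financial event
    range(1, 6):   "Laid off when the local factory closes",
    range(6, 11):  "The savings and loan crisis wipes out part of your savings",
    range(11, 16): "You land a job in the booming tech sector",
    range(16, 21): "A tax cut boosts your take-home pay",
}

def roll_on(table):
    """Roll a d20 and look up the matching entry in a random table."""
    roll = random.randint(1, 20)
    result = next(entry for rng, entry in table.items() if roll in rng)
    return roll, result

roll, status = roll_on(SOCIOECONOMIC_TABLE)
print(f"Rolled {roll}: your character starts out {status}.")

for year in (1981, 1984, 1987):
    roll, event = roll_on(EVENT_TABLE)
    print(f"{year} -- rolled {roll}: {event}")
```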

At periodic moments, Lexie prompted the students to stop and have discussions.  From the perspective of their characters (and, for those who felt comfortable doing so, in first person), students considered and debated how their person would react to the latest event.  The lesson took several days, and culminated in the students individually writing a reflective narrative/essay on the story of their character, making connections to the final days of the Cold War. 

I have seen and shared the idea of using a character sheet from a popular published TTRPG for a deeper demonstration of learning (for example, filling out a sheet from the perspective of a historical person or literary character, then defending your stats and choices).  However, Lexie's commitment of several academic days for this character generation lesson fostered an academically rich opportunity for students to really gain an empathetic POV of a person from another time period. 

Lexie talked about expanding on TTRPG inclusion next year in U.S. History.  Perhaps the students could roll up a time traveler at the start of the course, and several times throughout, the character could play through a scenario in a new historical period -- a little gaming, with a little "fish out of water" reflection!  But for now, it's great to see teachers like Lexie try something new for her students.  I can't wait to see what she'll do next!



Friday, May 24, 2024

Gemini for Google Education: Highlights of their Latest AI Release (May 2024)

It's been a big month for generative AI updates!  Back on May 13, OpenAI announced GPT-4o (as in "omni"), a "new flagship model that can reason across audio, vision, and text in real time."  In various short videos, that new "reasoning" was demonstrated in pretty remarkable ways.  Fire up the app on your laptop, and it actively listens and even participates in your meeting (providing insights and summaries), or it can tutor you on understanding a math problem.  Open it on your phone, and it can translate between two speakers in real time, referee two people playing rock-paper-scissors, or even make up a new song while harmonizing with a second ChatGPT AI.  Additionally, the lag between input and output has shrunk so much that interacting with ChatGPT will start to feel like a natural conversation with a human. 

It's another example of ChatGPT seemingly leapfrogging its competition.  Since November 2022, when ChatGPT first broke through to the general public, other major generative AI platforms have come forward, but without as much fanfare.  Microsoft's Copilot has an attractive user interface and, among other features, can generate accurate images inside of itself, allow voice-to-text input, and provide hyperlinks in responses, yet I hardly hear it mentioned as a person's first AI platform of choice.  Google's AI went from Bard to the renamed branding of Gemini and was much like Copilot in its features.  While Gemini and Copilot allow uploading of images, you can upload both text and image files to analyze or alter in ChatGPT.  (A quick disclaimer: as you likely already know, AI is sometimes wrong and hallucinates.  When it comes to inputs, I highly recommend trying prompts in multiple genAI platforms and comparing the results.)  

But that was then, this is now.  Gemini may have finally played its ace card with its latest upgrade, announced in a live webinar yesterday.  (By registering, you can get access to the 45-minute archived recording.)  Finally, Google seems poised to lean in on two of its major assets: its near-monopoly in education with its variously tiered domains, and the world's most popular cloud-based productivity app suite.

Here are some of the highlights from the webinar.  (Images are screenshots from the webinar unless otherwise noted.)


There is now a premium version of Gemini AI, one that can be incorporated across some of the core Google apps: Docs, Gmail, Slides, Sheets, and Meet.   With the upgrade, AI help is just a button click away.  This integration is likely a game-changer.  Why pop open a new tab for another genAI when Gemini could be built right in?

From the Google course "Get Started with Gemini for Google Workspace."

However, for educational domain customers, Google recognizes there are special needs.  Gemini will now offer, free of charge, several assurances: the data inputted by students and teachers will not be human-reviewed, will not be used to train AI models, and will not be shared outside of their domain. (These three highly sought features are coming "soon," a phrase used several times in the webinar and in its various promotional material graphics.) For educators with privacy concerns, this alone might be a compelling reason for a school district to recommend Gemini over other genAI tools.  Vivek Chachcha, Product Manager of Gemini Education, promises that the AI tool will help "save time," "make learning more personal," "inspire creativity," and help students "learn confidently" (by "empower[ing] students with guided support").  

The webinar included several video examples of Gemini at work in various Google apps, and some shorter excerpts of these videos are available separately on the Google for Education YouTube channel.

Although this video shows examples from higher ed, they could apply to anyone needing to increase their productivity and effectiveness (2:58):


In this clip, an instructional coach uses Gemini to draft a professional development session agenda in Sheets (35 seconds):


In this last clip, a teacher inside of a Doc creates a first draft of feedback on a student's poem (47 seconds):


In a related tool revealed earlier this week, Google also has a "side panel" feature that will incorporate Gemini.  This will basically allow the AI to look across your various apps and files in Google Drive in order to complete the task.  (Currently, access to this side panel requires being enrolled in Google Workspace Labs, which will likely need the approval of your domain's admin.)


An example of Gemini's side panel, inside of my personal Google Drive.

Yes, yes, you may be saying, but what about the price?

First, it may be helpful to compare the current free Gemini chat AI (accessible in a separate tab, at its own URL) versus the new paid Gemini for Workspace:

Next, note that these paid licenses come at two different tiers of pricing.  (The upper "Premium" license includes Gemini being able to have "advanced meetings" in Google Meet, a potential nod to the ChatGPT meeting summary/interaction feature mentioned at the start of this blog entry.) The good news: there is no minimum number of licenses you must purchase, and there is a discount if you order with a yearlong commitment before August 23, 2024.  The bad news: these premium features do not come automatically with any current upgraded Education domain tier, although Education Plus customers can qualify for a bit more of a discount, again if ordered before August 23.


While the integration of Gemini across your Google apps does cost, it is easy to see how powerful the AI could be for Google Suite learners.  If nothing else, it certainly gives Gemini an opportunity to jump to the front of the genAI competition line!

I'll end this blog entry with some Gemini resources:
  • For tech and IT friends, read this blog entry from "Workspace Updates" (5/23/24) for an official Google breakdown of Gemini for Google Workspace for Education's coming upgrades.
  • If you're interested in more of a general audience narrative of what's happening with Gemini in education, here's an entry from Google's blog on 5/16/24.
  • Looking for even more ways to use Gemini for Google Workspace in education, from Google itself?  Check out these Slides from April 2024.  The resources and tips include advice on writing better prompts, multiple visual examples of how to use Gemini in various apps, and help for domain admin.
  • Google offers some free online courses.  Here's one I just completed myself, and I highly recommend for a low-stress walkthrough: "Get Started with Gemini for Google Workspace."  (It requires logging in and allowing ExceedLMS to access your Google account, but again, it's free!)
One last thing: have a great summer and enjoy some time off!