Center for Teaching & Learning | Baruch College


ChatGPT & Its Impact On Teaching In Spring 2023

Filed Under: Pedagogy, Technology February 2, 2023 by Craig Stone


Introduction

We’ve been hearing a lot in the news about how access to ChatGPT will revolutionize our teaching practices. Understandably, faculty have deep concerns about how this will impact student learning and academic integrity. We also have a lot of questions about how much we will have to change our teaching practices and how much time and effort this will take. Many faculty had to do a lot of revision, reflection, and learning in March 2020 and still feel worn out from that experience. While some faculty might view ChatGPT as an exciting new opportunity to enhance learning, others may feel wary and overwhelmed by its possibilities.

The following is our attempt, in February 2023, to synthesize what we’ve learned and make some targeted suggestions to faculty for the Spring 2023 semester. Some of our suggestions should be fairly fast to implement; others are more time-consuming.


What is ChatGPT?

ChatGPT, short for “Chat Generative Pre-Trained Transformer,” is a web-based “chatbot” that responds to user prompts about any topic. The application’s creator, OpenAI, says that ChatGPT can “answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”

In practice, ChatGPT returns seemingly educated responses on a wide variety of topics. It can spot errors in programming code, craft an email, construct a resume, write an essay, or answer multiple choice questions, among many other text-based capabilities. It can also alter its responses or expand upon them at the user’s request. However, it is important to note that the application itself does not analyze concepts in the way a student would while writing a paper, or a faculty member would while doing research. The New York Times technology columnist Kevin Roose explained that ChatGPT works, “in extremely oversimplified terms, by making probabilistic guesses about which bits of text belong together in a sequence, based on a statistical model trained on billions of examples of text pulled from all over the internet” (“The Brilliance and Weirdness of ChatGPT,” December 5, 2022). Simply put, it strings together words and phrases that are statistically likely to be related and forms them into grammatically accurate sentences and structured paragraphs. And while it does that very well, it has several notable shortcomings. Among them are: lack of depth, repeated responses, and fabricated citations.
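To make Roose’s description concrete, here is a deliberately tiny sketch of the underlying idea: a bigram model that picks each next word according to how often it followed the previous word in its training text. This toy is our own illustration, not ChatGPT’s actual architecture; the real system uses a vastly larger neural network, but the principle of sampling statistically likely continuations is the same.

```python
import random
from collections import Counter, defaultdict

# Tiny "training corpus" -- real models train on billions of documents.
corpus = ("the causes of the war are complex and the causes of the war "
          "are debated by historians and the war is studied widely").split()

# Count which words follow which: the model's entire "knowledge."
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:  # no known continuation: stop
            break
        choices, weights = zip(*counts.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

The output is grammatical-looking but has no understanding behind it, which is exactly the strength and the weakness the column describes.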

ChatGPT was made available for public use in November 2022 and is still in what OpenAI calls a “research preview.” While it’s currently free to use, it is unclear if, when, and how the application will be monetized in the future.


CTL’s Position

It’s incredibly important to remember that we are now in a moment when we are reacting to a new technology that is receiving a lot of attention. Faculty should first try the technology and then draw their own conclusions as to how it may impact their teaching and student learning in their classes.

We encourage faculty to use the introduction of the ChatGPT tool as an opportunity to reflect on their teaching practices and broaden learning rather than to immediately jump to simply monitoring student assignments through one of the new ChatGPT detectors.

We therefore recommend a pedagogical approach. Our position is motivated by our mission to reflect upon pedagogical opportunities and the enabling role of technology in education. At the CTL, we encourage incorporating technology as part of our pedagogy, not simply as a solution to any given classroom problem. This includes an honest and critical assessment of educational technologies that also takes into account the pedagogical environment our faculty and students work in. That environment is always evolving, leading us to think even more critically (and creatively) about the role of technology in education.

The rest of this document shares our experiences investigating ChatGPT, as well as the observations of many of our Baruch colleagues. You’ll see that within a short amount of time there has already been thoughtful exploration, constructive advice, and an array of perspectives. At the end of the document, you’ll see who has contributed.


Try ChatGPT

Our first recommendation is for you to see what ChatGPT is for yourself and to reflect on your experience with the technology. To get started, visit the site where it is hosted. We suggest having on hand one of your assignment prompts or test questions. Enter these course materials into ChatGPT and look at the results.

Some things to consider:

  • How easy was it to use?
  • What do the results look like? Do they look convincing, like a student’s work in your class? Is it obvious that it wasn’t a person studying your topic in your class?

Several articles outline situations in which ChatGPT performs shockingly well, and others in which it is obviously artificial intelligence, rather than a human, answering the question. Following are some observations. As you read through them, consider whether they reflect your experience trying ChatGPT.

What is ChatGPT good at?

Here are some quotes from faculty in the Management Department:

“ChatGPT is good at definitions, and produces very well written boilerplate / generic text. It is not as good at specifics, but if you prompt it correctly it can weave in the specifics.”

“Because of the way it’s trained on documents, ChatGPT is good at repeating basic knowledge (completing the sentence) but struggles with multi-step reasoning and particularly with common misconceptions (it is most likely, in fact, to repeat the common misconception as fact).”

From Furman University Philosophy Professor Darren Hick, referenced in The New York Times “Tech Fix” column:

“To someone familiar with the material, it raised any number of flags. . . This is good news for upper-level courses in philosophy, where the material is pretty complex and obscure. But for freshman-level classes (to say nothing of assignments in other disciplines, where one might be asked to explain the dominant themes of Moby Dick, or the causes of the war in Ukraine — both prompts I tested), this is a game-changer.”

A faculty member in Accounting tried a range of questions in ChatGPT and concluded that it could provide

  • Explanations – “Explain lease accounting to a 5 year old”
  • Questions
    • Multiple Choice, True/False, Fill in the Blank, Short answer
    • Some numerical questions
  • Open Ended Writing
    • Expository: “Write an essay explaining the impact of machine learning on auditing”
    • Creative: “Write a country song about using ChatGPT in Accounting”
  • Coding
    • “Write Python code to calculate abnormal returns”
    • “Find the bug in the following piece of code”

When he tried a mix of exam questions, some from the test bank and some custom-written, ChatGPT answered over 84% of the test bank questions correctly, versus about 48% of the custom questions.
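As a point of reference for the coding prompts above, a minimal, hypothetical version of the “calculate abnormal returns” task might look like the following market-adjusted sketch. This is our own illustration, not ChatGPT’s output, and real event studies typically estimate a market model first rather than subtracting the market return directly.

```python
# Illustrative sketch: abnormal return is commonly defined as a stock's
# actual return minus the return predicted by a benchmark -- here, simply
# the market return (a "market-adjusted" model).
def abnormal_returns(stock_returns, market_returns):
    """Return the list of daily abnormal returns: actual minus benchmark."""
    if len(stock_returns) != len(market_returns):
        raise ValueError("return series must be the same length")
    return [s - m for s, m in zip(stock_returns, market_returns)]

stock = [0.012, -0.004, 0.021]   # hypothetical daily stock returns
market = [0.010, -0.001, 0.015]  # hypothetical daily market returns
ar = abnormal_returns(stock, market)
car = sum(ar)  # cumulative abnormal return over the window
```

Tasks like this, which combine a well-documented formula with boilerplate code, are exactly where faculty reported ChatGPT doing well.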

What is it not so good at?

1. Repeated Responses

In the simplest example, ChatGPT repeatedly gives the same response to the same question. University of North Carolina Wilmington professor Ray Pastore explained in a YouTube video that he gave ChatGPT the same prompt multiple times over multiple days and always got the exact same answer. This gives instructors a tool to combat cheating: simply by entering their own essay prompts, they will know what to look for. This shortcoming also means multiple students using ChatGPT will turn in very similar assignments. (“ChatGPT for Educators – K12 and Higher Education,” December 20, 2022.)

In fact, ChatGPT provided almost identical answers even when key words in a prompt were changed. A faculty member in the Management Department wanted to see how it performed with one of her assignments. She asked ChatGPT for five strategic recommendations for three different companies: Delta, Southwest, and Amazon. For both Delta and Southwest Airlines, ChatGPT gave the same five recommendations and presented them in the same order, with only minor differences in the text.

For example, this was the first recommendation for Southwest:

“Expand international operations: Southwest Airlines has a strong domestic presence, but it currently only serves a few international destinations. Expanding into new international markets could help the company tap into new sources of revenue and increase its brand awareness globally.”

And this was the first recommendation for Delta:

“Expand international operations: Delta Airlines has a strong presence in the domestic market, but it could consider expanding into new international markets to tap into new sources of revenue and increase its brand awareness globally.”

Several of the recommendations were repeated verbatim except for changing the company name. And while this may seem natural for two companies in the same industry, ChatGPT also gave remarkably similar recommendations for Amazon. Instead of “expand international operations,” ChatGPT recommended that Amazon “expand into new markets and geographies.” The next four recommendations were the same ones given for Southwest and Delta, but with descriptions tailored for online retail rather than air travel.

2. Citations

ChatGPT also struggles to cite sources, or entirely fabricates them. Since it works by piecing together related words and phrases, rather than searching the internet for answers in the way a human would, it is never pulling information from any one source and cannot create accurate citations. Instead, when it’s asked to provide citations, it seems to fabricate them entirely.

A data scientist in Switzerland told NPR that she made up a fake physical phenomenon to test ChatGPT, and the application responded with surprisingly plausible sounding information, including sources. However, the information and the sources were fabricated: Names of real physics experts were used, but the publications cited did not exist. (“A new AI chatbot might do your homework for you. But it’s still not an A+ student,” December 19, 2022.)

To apply this to assignment design, another Management professor noted:

“ChatGPT does not do properly formatted citations, so requiring them may deter some potential issues (or perhaps even better, requiring specific sources to be used and cited—or having students submit a list of their sources in advance for approval and limiting them to those sources for their paper).”

ChatGPT’s citation capabilities may improve over time, but requiring sources is currently an effective countermeasure—provided you check them.

3. Current Events

The text examples that make up ChatGPT’s dataset only go through 2021, so questions about current events will return inaccurate, generic, or outdated answers. For example, if prompted to answer a question about the causes of the Ukraine War, ChatGPT would generate an answer based upon a dataset that predates the current war which started in 2022.

If ChatGPT thinks that a user is asking about something that occurred in 2022 or later, it will return a message saying its “knowledge cutoff is 2021,” and it will sometimes give this error even if the question is about an event that happened in or before 2021.

Creating assignments that relate to current events should severely limit the usefulness of ChatGPT for students. However, it should be noted that more recent text could (and likely will) be added to ChatGPT’s dataset at some point.

4. Distinct Writing Style

Writing generated by ChatGPT has a distinct tone and style that is often described as “upspeaking” or “stilted prose.” Some people describe it as “writing too well”; others call it “bland” or “generic.”

A Management Department faculty member notes that if the communication style of a student in class or in previous assignments is quite different from what is turned in on a written assignment, it can be “a natural red flag.”

You can get to know your students and their writing abilities by assigning low- or no-stakes writing assignments in class. They won’t fully reflect the most polished, academic tone your students can muster, but they will give you a sense of their voices and thought processes. You can also assign response papers and blog posts beginning early in the semester, which will give you an idea of what more considered written work by your students looks like. A quick conversation with your students will also show if their level of understanding of the subject matter matches up with the level displayed in their assignments.

Another observation that came up is that while the ChatGPT output is not always accurate, it does appear quite confident in its writing style.

Also, ChatGPT follows some patterns, such as rephrasing the question at the beginning of its response and signposting throughout, e.g., beginning with “The causes of the war in Ukraine are…” Though we may encourage students to respond to our questions and prompts in ways that incorporate some of their language, ChatGPT tends to produce stilted restatements of the questions asked of it.

5. Certain Math Problems & Images

While ChatGPT correctly calculates basic math problems, one faculty member noted that it struggles with multi-step problems:

“For example, in one of my questions, it reasoned that 5 AM to 10 PM was a 15-hour period. In trying to calculate process capability, it decided the range between USL=2.2 and LSL=2.0 was somehow 0.22 rather than 0.2.”
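For reference, both calculations in the quote above are easy to verify by hand or with a few lines of code:

```python
# 5 AM to 10 PM on the same day: 22:00 minus 05:00 is 17 hours, not 15.
hours = 22 - 5
assert hours == 17

# The range between an upper spec limit (USL) of 2.2 and a lower spec
# limit (LSL) of 2.0 is 0.2, not 0.22. (round() sidesteps tiny
# floating-point representation error in 2.2 - 2.0.)
spec_range = round(2.2 - 2.0, 10)
assert abs(spec_range - 0.2) < 1e-9
```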

Another faculty member notes that:

ChatGPT cannot answer questions involving images or separate datasets, although ChatGPT can read a limited amount of tabular information. It may struggle with complex questions involving complicated calculations or the integration of many concepts.


Recommended Pedagogical Approaches

Following are some actions faculty may take. Some of them, such as reviewing your test questions, might be part of your regular course preparation. Others, such as a critical review and revision of your academic integrity statement, might be something you do occasionally. If you want to incorporate many of these suggestions this semester and can do so, great! It’s also good to try a few of them, see how they go, and plan to do more for later semesters.

Emphasize and Clarify Your Academic Integrity Statement

It’s always a good idea to look at how you discuss academic integrity with fresh eyes to make sure it’s clearly communicating your approach and expectations. You can add some language that outlines your expectations with ChatGPT and other technologies.

Remember that expectations around citing authorship can vary greatly according to cultural context, and each discipline may have its own specific conventions for how to incorporate and credit the work of others. This is our way of saying that students will likely encounter different expectations in their courses, so it’s best to err on the side of stating explicitly what is expected in your class rather than assume your approach is universal.

In general, communicate the expectation that students will do their own work, properly cite the work of others, and practice integrity in their coursework, including all assignments and tests.

You can learn more about statements on academic integrity here: official college policies about academic honesty.

If you choose to forbid the use of AI, here’s one possible statement, included in Provost Essig’s email on January 16, 2023. Ironically, the message was generated by ChatGPT:

“The use of artificial intelligence (AI) is strictly prohibited in all coursework and assignments. This includes, but is not limited to, the use of AI-generated text, speech, or images, as well as the use of AI tools or software to complete any portion of a project or assignment. Any violations of this policy will result in disciplinary action, up to and including a failing grade for the assignment or course. Our goal is to encourage critical thinking and creativity, and the use of AI detracts from this objective. Students are expected to use their own knowledge, research and analysis to complete coursework.”

Here’s another possible statement adapted from a faculty member in Natural Sciences:

Use of artificial intelligence tools (AI)

The use of AI in our course is subject to the same rules as the use of any other source: it is permitted, but the source must be cited. Any violations of this policy will be reported and result in disciplinary action, up to a failing grade for the assignment or course.

The goal of our course is to gain actual understanding of the natural world. While in the past you might have gotten some credit for paraphrasing sources, this can now be done by AI. In that sense, AI has raised the bar for us humans. It is now more true than ever that you will have to use your own knowledge, research, and analysis to demonstrate real understanding of the physical world in order to earn a good grade in this course.

Here’s another AI Policy that’s circulating on LinkedIn from Ethan Mollick, Associate Professor at The Wharton School of Business:

AI Policy

I expect you to use AI (ChatGPT and image generation tools, at a minimum) in this class. In fact, some assignments will require it. Learning to use AI is an emerging skill, and I provide tutorials in Canvas about how to use them. I am happy to meet and help with these tools during office hours or after class.

Be aware of the limits of ChatGPT:

  • If you provide minimum effort prompts, you will get low quality results. You will need to refine your prompts in order to get good outcomes. This will take work.
  • Don’t trust anything it says. If it gives you a number or fact, assume it is wrong unless you either know the answer or can check with another source. You will be responsible for any errors or omissions provided by the tool. It works best for topics you understand.
  • AI is a tool, but one that you need to acknowledge using. Please include a paragraph in any assignment that uses AI explaining what you used the AI for and what prompts you used to get the results. Failure to do so is in violation of academic honesty policies.
  • Be thoughtful about when this tool is useful. Don’t use it if it isn’t appropriate for the case or circumstance.
  • Here is one possible way to cite ChatGPT:

    “OpenAI (2022). ChatGPT. [Computer program]. Available at: https://openai.com/blog/chatgpt/ (Accessed 13 December 2022).”

In addition to revising your written academic integrity statement, it’s good to read it aloud in class, explain your motivations, and allow time for class discussion and student questions. Some students may be reluctant to ask a “silly” question or to appear as though they are trying to “cheat,” so you can also encourage them to discuss the policy in groups and then surface questions as a group rather than as individuals.

Discuss Why Learning Matters

It might sound silly, but you should ask yourself the following question: Why do you find students using ChatGPT problematic? Why might you consider it a violation of academic integrity?

Is it because students who use ChatGPT…

  • have an unfair advantage over others?
  • are misrepresenting their work?
  • make it hard for you to assess if they are learning?
  • are missing out on other opportunities to learn?

There are also many reasons why students might use ChatGPT. One of our Baruch colleagues reflects:

I too have found that the lack of confidence, motivation and study skills creates a culture of panic and urgency, which may lead students to use technology such as ChatGPT. There was much talk in recent years about “learning loss,” and I view things differently. Students have learned a great deal (whether or not it’s what we hoped they’d learned), such as how to use technology to their advantage, and where there’s a will, they will find a way. Perhaps we can redirect that will, effectively empower students, and help them to better understand the consequences of their actions.

So what are the consequences of these actions? Does using ChatGPT reduce the practice students realistically need to gain mastery of content or a skill? Are students missing out on opportunities to develop their own research, analytical, and writing skills? Are they abdicating their chance to form their own opinions and relying instead on automated material? By misrepresenting what they have learned, will they open themselves up to being embarrassed in the workplace when they don’t know what is considered a basic concept in their field?

Share with your students why learning opens opportunities for their futures, particularly in their careers. It’s tempting for people to get caught up with the efficiency of using a tool to quickly get through a moment, and not consider the longer-term implications of this decision.

Consider taking time at the start of the semester to discuss why learning matters in your course, and to acknowledge that learning takes time and effort. It can be frustrating and rewarding; it can come easily at times and not at others. Encourage students to seek out academic support services and ask questions, and emphasize that it’s okay to say they don’t understand something. Creating a learning environment that is supportive of the sometimes messy process of learning is important.

Use ChatGPT to inform the design of your assignments and assessments

Running your own assignments and assessments through ChatGPT opens up a range of options. By letting students know you have done this, you might deter some of them from using it.

Following are some additional ideas on changes you might make. One faculty member plans to “Create exercises where I show students the output of ChatGPT on one of my questions and ask the students to find the flawed reasoning and explain how to correct it.”

Many faculty plan to design questions or projects that are difficult to solve using ChatGPT. After some of our faculty used ChatGPT and observed its strengths and limitations in the context of their classes, they came up with the following ideas:

  • It can vaguely describe a process analysis but it can’t do a process analysis of the deli counter at Santiago Grocery in the Bronx (a real store that one of my students’ families owned).
  • It knows general knowledge but it doesn’t know what we talked about in class. You can tailor an assignment prompt to cases, examples, or frameworks discussed in class, as ChatGPT does not have the context to answer such an assignment correctly. (“Give an example from our class discussion illustrating ___”).

Similarly, you might consider asking students to reference course materials or to relate course content to current events or their own experiences.

Encourage students to practice using AI

As the technology evolves, we hold a responsibility to teach students how to engage productively and responsibly with it, which will include using it and reflecting on its utility and limits. A Baruch colleague reflects:

I can imagine a range of assignments that would elicit just this kind of engagement from students. Students might examine the extent to which AI can execute the genre and style of, say, op-eds published on a particular platform, by comparing “real” examples with one generated in response to a ChatGPT prompt, and use this activity as a basis for deeper exploration of persuasive strategies and style. They might fact check a response written by ChatGPT and assemble a set of annotated sources that verify, complicate, or expose holes in its assertions. They might submit a narration of where and how they used the tool to complete a coding task, where in the process they had to fill in the gaps independently, and how they checked the work to confirm its accuracy.

Make sure to teach your students to acknowledge their use of AI such as ChatGPT in footnotes or references. While no universally accepted way to cite ChatGPT exists yet, we should do our best to model signposting our use of computer-generated technology. This practice may have the hidden benefit of normalizing ChatGPT as “yet another tech” with its own pros and cons rather than a source of temptation for potential plagiarists.


Rethink your Assignment/Assessment Choices

This is a good moment to take a look at your course learning goals, activities, and assessments. If you have the flexibility to change your syllabus, reconsider whether your current approach is still the best choice. It’s easy over time to get comfortable with a particular assignment or set of test questions, and revising a curriculum plan can be a lot of work; sometimes we need to learn a new technological skill or tool. Yet perhaps there is another way to reach that learning goal, or to update the mode or instructions of an assignment.

Based upon the results of our COVID-19 Student Experience survey, we know that our students value courses with consistent and clear communication, “relevant” assignments and readings, and active engagement. Are there moments in your assignment instructions where you share why the assignment or assessment is relevant to students’ learning and futures?

Consider where and how the work is done

There are a growing number of suggestions for faculty to consider. Here are a few:

  • Scaffolding longer assignments, so that students turn in an outline and then a rough draft before the final draft, does not eliminate cheating but requires students to be consistent across multiple submissions in multiple formats.
  • Incorporating an annotated bibliography as one of the deliverables would also require a student to conduct a level of research that ChatGPT isn’t currently capable of doing. If not using a scaffolded approach, instructors could have students submit a list of sources in advance and require that those sources be incorporated into the final draft, or simply require a list of citations along with the final draft and then check those sources for validity.
  • A suggestion we’ve seen frequently since the introduction of ChatGPT is to reduce or eliminate out-of-class graded assignments and increase in-class work. Another idea, if you can allot the time in your class schedule, is to return to (or continue using) paper-and-pencil tests and exercises in class.

One note of caution as you make these changes:

“Other solutions that seem neutral might change what we’re assessing: timed, in-class, handwritten exams assess a different set of skills (fine motor skills, time management, response to stress, immediate recall, etc.) than at-home, untimed, typed essay exams. I can imagine cases where that’s good—for example, if a student needs to be able to immediately report back an accurate answer without consulting sources—and others that wind up measuring the wrong skill.”

There are pros and cons no matter how one develops an assignment or assessment. The main idea is to be aware of the trade-offs of whichever direction you take.

Technological Approaches

Even though ChatGPT is relatively new, there is already a rush to create ChatGPT detector software. Examples include:

  • GPT-2 Output Detector
  • GPTZero.me

One faculty member reported mixed results when trying them out:

I tried using GPTZero (at present, a free program written by a Princeton undergrad to identify AI-generated content) but have not had any luck with it accurately identifying text that was written by ChatGPT in response to specific prompts (by me).

I did have luck with https://openai-openai-detector.hf.space correctly identifying ChatGPT content (that I asked it to create) as fake.

Several faculty noted the limitations of solely relying on a ChatGPT detector or proctoring software. One faculty member wrote:

The chat bots created to detect AI can only do it within a probability range. The smart students will use AI to write the essay and then move things around to frame it in their own words. . . (Ideally), the faculty member would understand the student and their writing/communication style versus simply relying on third-party software to detect plagiarism.

Another faculty member shared:

Scarily, sources on Reddit/etc. suggest that ChatGPT can also be used to answer multiple choice questions, which is something I hadn’t thought of before. . . but yikes (as someone who teaches and gives tests online. . . major yikes). I wonder if maybe for online exams in Blackboard, for example, we could use a browser lock (Respondus or similar) to block students from opening ChatGPT while an exam is open in Blackboard? Of course students could still use their phones or another device to evade a browser lock but presumably would have to then type in the questions and all options or convert an image to text which would take time—so time limited online exams may be an option.

Many have noted that NYC public schools have blocked ChatGPT on their devices and servers, yet this action has limited impact: it does nothing for at-home or online coursework done on personal devices.

Technological countermeasures in general have limited impact. This is why we recommend an approach that focuses on pedagogy rather than relying on technology.


Conclusion

As technology evolves, many of the norms and assumptions about plagiarism, revision, and cheating require interrogation. Some questions that come to mind:

  • How much can a student use ChatGPT before it crosses the line into plagiarism or cheating?
  • Can you use it for ideas for your paper as long as you do the research and write it yourself?
  • Can you use it to break through a bout of writer’s block?

It’s important to remember that there were concerns with the introduction of calculators, Wikipedia, Google Search, laptops, and Grammarly. We have largely navigated these introductions as a society in a process that has required labor and reflection, but arguably enriched the way we think about the evolving potential and limitations of technology.

Like ChatGPT, Wikipedia is valuable as a way to engage with and point to information but does not work as the source itself. And while plagiarism of Wikipedia is easier to catch than content generated by ChatGPT, we hope that the information in this document about ChatGPT’s limitations and potential uses will provide a framework for thinking critically about its impact on teaching and learning.

Our teaching environment and context are always evolving. Some new technologies have proven problematic, while others have opened up opportunities for deep learning and innovation. ChatGPT is another such introduction, and it will take some time for us to get acclimated to its broad availability.


Credit and Context

Included in the Center for Teaching and Learning mission is to “reflect upon pedagogical opportunities and the enabling role of technology in education.” This includes an honest and critical assessment of educational technologies that also takes into account the pedagogical environment our faculty and students work in. As Sean Michael Morris and Jesse Stommel point out in their essay on critical digital pedagogy, “We are better users of technology when we are thinking critically about the nature and effects of that technology.”

Contributors

The following people were part of the conversations that helped develop this document and/or have pedagogical ideas included. As in any good process where there is healthy debate, the resulting document does not necessarily reflect the opinions of everyone who was part of the discussion. Yet, we think it’s important to acknowledge that this issue is important to many people in our community and many people are engaging in its exploration.

Lauren Aydinliyim, Stefan Bathe, Shiraz Biggie, Donal Byard, Christopher Campbell, Lukasz Chelminski, Raquel Benbunan-Fich, Julia Goldstein, Seth Graves, Maria Halbinger, Diana Hamilton, Catherine Kawalek, Romi Kher, Marios Koufaris, Arthur Lewin, Brandon Lock, Alex Mills, Kannan Mohan, Scott Newbert, Harmony Osei, Glenn Petersen, Rachel Rhys, Allison Lehr Samuels, Christopher Silsby, Dennis Slavin, Craig Stone, Pamela Thielman, Katherine Tsan, John Wahlert

Editors: Allison Lehr Samuels, Craig Stone

A note to our colleagues at other institutions: Feel free to remix this document for your own institutional contexts.

Baruch College Center for Teaching and Learning · 151 E. 25th Street, Room 648 · 646-312-1565 · [email protected]


Creative Commons License
Unless otherwise specified, all CTL site content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.