S2E5 - Peer Review & Talk With Miranda Stahn (MSc)
References & Transcript
- Metaphorigins Instagram Page - https://www.instagram.com/metaphorigins/
- Scientific Literature - Wikipedia - https://en.wikipedia.org/wiki/Scientific_literature
- Elsevier - https://www.elsevier.com/reviewers/what-is-peer-review
- Dr. Deborah Sweet - CellPress - http://crosstalk.cell.com/blog/the-pros-and-cons-of-publishing-peer-reviews
- Reviewer Number 3 - Twitter - https://twitter.com/thirdreviewer
- Publishing Research Consortium - Survey - https://ils.unc.edu/courses/2015_fall/inls700_001/Readings/Ware2008-PRCPeerReview.pdf
- ExOrdo - https://www.exordo.com/blog/the-peer-review-process-single-versus-double-blind/
- Review on Peer Review - Scientific Article - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4975196/
- Ishaq ibn Ali al-Ruhawi - http://www.ishim.net/ishimj/5/03.pdf
- Philosophical Transactions of the Royal Society - https://royalsocietypublishing.org/journal/rstl
- Institute for Scientific Information (ISI) - https://www.isi-science.com/
- Journal Impact Factors - https://osu.libguides.com/c.php?g=110226&p=714742
- Why peer review fails - Article - https://www.vox.com/science-and-health/2016/11/23/13713324/why-peer-review-in-science-often-fails
- Mathematical Modelling of Peer Review - Scientific Article - https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0166387
- Dr. Richard Smith - BMJ - https://breast-cancer-research.biomedcentral.com/articles/10.1186/bcr2742
- How to Become a Peer Reviewer - https://authorservices.wiley.com/Reviewers/journal-reviewers/becoming-a-reviewer.html/index.html
- No compensation for peer review - https://www.editage.com/insights/is-paid-peer-review-a-good-idea
- Incentives for Peer Review - Scientific Article - https://journals.lww.com/ijsoncology/Fulltext/2018/02000/Peer_review_in_scholarly_publishing_part_A___why.1.aspx
- Peer Review Deadlines - https://www.editage.com/insights/peer-review-process-and-editorial-decision-making-at-journals
- Wiley - https://authorservices.wiley.com/Reviewers/journal-reviewers/what-is-peer-review/types-of-peer-review.html
- Dr. Eric Weinstein - The Portal - https://www.youtube.com/watch?v=U5sRYsMjiAQ
- Is Peer Review a Good Idea? - Scientific Article - https://academic.oup.com/bjps/advance-article/doi/10.1093/bjps/axz029/5526887
- Pre-Print Peer Reviews - https://www.aje.com/arc/benefits-of-preprints-for-researchers/
- Scientific Merit - https://wellcome.ac.uk/news/phd-merit-needs-be-defined-more-just-publications
- F1000Research - https://f1000research.com/
- PeerJ - https://peerj.com/
- Rubriq - https://www.force11.org/node/4672
- ASAPBio-HHMI meeting - Article - https://www.sciencemag.org/news/2018/02/researchers-debate-whether-journals-should-publish-signed-peer-reviews
- Miranda Stahn - Twitter - https://twitter.com/mi_RNAda
- Miranda Stahn - Instagram - https://www.instagram.com/alt_endings/
- Dr. Dominic Sauvageau - Bio - https://apps.ualberta.ca/directory/person/dsauvage#Overview
- Genome Alberta - https://genomealberta.ca/
- Science Writers and Communicators of Canada - https://sciencewriters.ca/
- Science Networkers - https://twitter.com/scinetworkers
- WISEST - https://www.ualberta.ca/services/wisest/index.html
- SOAR - https://twitter.com/NetworkingSoar
- Flying High by jantrax | https://soundcloud.com/jantr4x
- Music promoted by Switxwr | https://www.free-stock-music.com
- Creative Commons Attribution 3.0 Unported License | https://creativecommons.org/licenses/by/3.0/deed.en_US
To my astounding family and friends. Near and far. Old and new. This is Kevin Mercurio on the mic. And welcome to the fifteenth episode of the Metaphorigins Podcast.
Just to reiterate the new things happening this season. One thing is the Metaphorigins merch: if you would like your own shirt and/or mug as shown on the Metaphorigins Instagram page, just shoot me a message on my website or Instagram account. Any profits I make each month will go towards a local charity (e.g. the SPCA, the Ottawa Regional Cancer Foundation, or other suggestions). Another “feature” I will have this season is requests. Starting on the next episode, some episodes will be dedicated to expressions that listeners of this podcast would like me to cover, including the creative intro story. To make a request, please shoot me a message on, again, my website or Instagram.
Now, to show support if you like this sort of content, please make sure to rate and subscribe to the podcast on Apple or whatever platform you are listening to this on, and follow @metaphorigins on Instagram, where I will be posting most of my updates, as well as on my personal website: kjbmercurio.com/metaphorigins. Regular listeners of this podcast will know that I hold a draw every 5 episodes (for now anyway). And so I will be giving away yet another awesome Metaphorigins shirt to one lucky listener, right now! I have done the draw for the butterfly-printed Metaphorigins shirt, and the winner is Andrew Kam. Wooooow! Congratulations! I’ll shoot you a message following this episode.
Okay. In today’s episode I’ll be stepping back from metaphorical language and talking about something that people are beginning to become familiar with due to the current state of the world. I touched on it briefly in my 10th episode about Scientific Literature, and because it is quite a significant subject, I believe it deserves an episode all on its own. And that topic is peer review.
Oh Peer Review. How many times have you heard that concept uttered in your career? I guess that question is mainly to my fellow scientists. The importance of peer review. How peer review has revolutionized the dissemination of knowledge. Hey, my paper has been accepted after passing peer review, no revisions required! If only that were the case, eh?
But to others in different professions… Does peer review conjure feelings of trust? Does peer reviewed work mean factual knowledge to you? I mean, why would it not? Theories criticized and challenged by experts in the field to fine tune hypotheses and experiments, cutting down towards the ultimate truth of nature and reality. Sounds good to me.
Now, if we remember from my previous episode, scientific literature is what academics, mainstream media and the general public consider as fact. Well, these scholarly publications undergo academic publishing, which requires a systematic review of the work, or what we call peer review. Elsevier, one of the leading giants of academic publishing, says that “the peer review system exists to validate academic work, helps improve the quality of published research, and increases networking possibilities within research communities” (https://www.elsevier.com/reviewers/what-is-peer-review). Hmm, is that true?
Let’s dissect that statement a little bit, as there’s a lot to unpack there. The system they are speaking of, paraphrased in my previous episode, consists of an 11+ step process from author submission of a manuscript (or the unformatted form of their work) to acceptance. Yes, there can be and generally are more steps to this, as often authors are asked to perform various revisions regarding more experiments, consideration of other hypotheses, clarity in data interpretations, etc. etc. This system validates academic work, in a way that journals can be satisfied with the papers they decide to publish, since the work has been checked by experts in the field, and thus improving the quality of published research as a whole. So those parts seem valid. However, increasing networking opportunities would be difficult, seeing as authors normally don’t know which experts are reviewing them in the most common form of peer review (more on types of peer review later). Perhaps reviewers can be aware of other experts in their field, and establish communication with them for potential collaboration? More later on how this can actually be a problem. In addition, I wonder how much that actually happens.
Peer review is one of those concepts that sound great in theory. A system made by researchers, and put in place relatively recently with the right intentions. Though, I think a problem scientists have is that we don’t truly think about the logistics of the peer review system. It’s not as simple and clear cut as many researchers think, especially for young up-and-coming scientists, too busy trying to crank out paper after paper, staying up late to finish experiments and analyze the observations jotted quickly in lab notebooks. In a blog post in CellPress, Vice President of Editorial Dr. Deborah Sweet mentions that “Younger scientists in particular often talk about the peer review process as being a mysterious ‘black box’ that they simply don’t understand” (http://crosstalk.cell.com/blog/the-pros-and-cons-of-publishing-peer-reviews).
This is very, very true (and yes I know there are no degrees of truth). The one thing I am sure younger researchers are aware of is the meme of douchebag Reviewer #2, happily highlighting 80% of the paper in red as if drunk off some sort of scholarly reviewer brewed cold pressed coffee. There’s even a Twitter account dedicated to such harsh comments, though the account is unfortunately named Review Number 3 (https://twitter.com/thirdreviewer). Like c’mon editor at Journal X, why can’t you start with the bad news and finish off with the good, critical-but-fair review?
The point here is not to question whether the implementation of peer review should exist or not. In a 2008 large-scale international survey conducted by the Publishing Research Consortium, out of 3040 academics, 85% agreed that peer review greatly helps scientific communication and 83% believed that without peer review there would be no control (presumably of the quality of work being published). Still a majority, but this statistic drops down to 64% of academics when asked if they are satisfied with the current system of peer review used by journals, and drops even further to 32% when asked if they believe that this [current] system is the best possible (https://ils.unc.edu/courses/2015_fall/inls700_001/Readings/Ware2008-PRCPeerReview.pdf). And that’s the thing. Are academics, especially those just beginning to establish their careers, even aware that different journals have different types of peer review systems? Are academics aware of the flaws and biases inherent in the most commonly used peer review systems? Are academics in tune with the pros and cons of keeping peer review as it is, or do they know of the movement to abolish it entirely in favour of a completely new way of disseminating scientific work? What about new journals and publishing companies trying to revolutionize this concept and compete against the giants who currently dominate the business, yes business, of publishing science?
That’s what this episode will attempt to elaborate on. I will start with the history of peer review, introduce the different types of peer review systems and the various movements in changing the current general system. You might be surprised to know just how complex this simple concept can get. My goal will be to highlight the gap in the knowledge researchers have in what the peer review process actually is, and for the public to better understand how interpretations of scientific work become “good enough to be fact”.
Peer review, best described by Brian Campbell in his blog on ExOrdo.com, "is like democracy, the saying goes, despite its flaws, it’s the best system we have” (https://www.exordo.com/blog/the-peer-review-process-single-versus-double-blind/). So let’s understand it.
Most of this information was obtained from many articles discussing the advantages and disadvantages of peer review systems, from fellow researchers to those in executive positions of academic publishing organizations. All sources will be mentioned in the description.
Let’s start with the interesting fact that peer reviewing scholarly work prior to dissemination to the masses was not always the case; in fact, its initial purpose was neither to improve the quality of the work nor to ensure its validity. In a 2014 review, of peer review funny enough, in the Journal of the International Federation of Clinical Chemistry and Laboratory Medicine, Kelly and colleagues state that “the peer review process was first described by a Syrian physician named Ishaq ibn Ali al-Ruhawi, who lived from 854-931, in his book Ethics of the Physician” (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4975196/). Paraphrasing the description, physicians were required to take notes describing the state of their patients’ medical conditions upon each visit, and the notes were scrutinized by local medical professionals as to whether the physician had met “the standard of medical care”. Interesting since, as you can imagine, falsifying data would have been incentivized. Yet it wasn’t until after the invention of the printing press, and the formation of journals and societies to publish scientific work in some form of catalogue, that the Philosophical Transactions of the Royal Society formalized the peer review process. This process was introduced to “help editors decide which manuscripts to publish in their journals”, likely as a method of choosing work that would boost readership. Peer review developed as scientific research became more abundant, fine-tuned by prestigious academic societies in the 18th century to finally have experts validate the findings in manuscripts, into the system we have today.
Peer review has become a standard for determining the credibility of work, and also the credibility of the publishers that disseminate said work. Theories and interpretations of experimental data are not accepted unless they have been published in a peer-reviewed journal. In fact, according to Kelly and colleagues, “The Institute for Scientific Information (ISI) only considers journals that are peer-reviewed as candidates to receive impact factors”. As defined by Thomson Scientific, “the Journal Impact Factor is the average number of times articles from the journal published in the past two years have been cited” (https://osu.libguides.com/c.php?g=110226&p=714742), a metric commonly used to determine which journals churn out the most refined and original work. This doesn’t always hold true, but scientific journals and their massive publishing organizations are another big topic that deserves its own episode in the future.
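The impact factor arithmetic behind that definition is simple enough to sketch in a few lines. Here is a minimal illustration; the journal and all numbers are hypothetical, purely to show the calculation, not data for any real journal:

```python
# A rough sketch of the Journal Impact Factor calculation described above.
# All numbers are hypothetical, purely for illustration.

def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """E.g. a 2020 impact factor: citations received in 2020 to articles
    published in 2018-2019, divided by the number of citable items the
    journal published in 2018-2019."""
    return citations_this_year / citable_items_prev_two_years

# A hypothetical journal that published 400 citable items over the previous
# two years and received 2,600 citations to them this year:
print(impact_factor(2600, 400))  # 6.5
```

So a "high-impact" journal is simply one whose recent papers are, on average, cited often, which is why the metric rewards popular fields and says nothing directly about the quality of any single paper.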
Now, do correct me if I’m wrong, but the way in which peer review is currently run was not heavily debated until fairly recently. Perhaps it was long regarded as a crappy but necessary process in academia. Hell, the topic has surprisingly been picked up by mainstream news sites like Vox, which in 2016 published an article titled “This new study may explain why peer review in science often fails” (https://www.vox.com/science-and-health/2016/11/23/13713324/why-peer-review-in-science-often-fails). In it, they reference a study published in PLOS ONE in which researchers used mathematical modelling to show that there is “a small minority of researchers shouldering most of the burden of peer review”, at least among those indexed in MEDLINE for biomedical research (https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0166387). This striking finding calls into question why peer review is not more uniformly distributed. It also calls into question whether peer review actually achieves what it is set out to achieve, as pointed out by the former editor of the BMJ, Dr. Richard Smith: “We have little or no evidence that peer review works, but we have lots of evidence of its downside.”
Let’s not get too far ahead of ourselves, and continue with the basics. How does one even participate in peer review? Wiley, another publishing giant, lists the common routes by which one can be invited to review a manuscript: “There is no one way to become a reviewer, but there are some common routes, these include:
- Asking a colleague who already reviews for a journal to recommend you
- Networking with editors at professional conferences
- Becoming a member of a learned society and then networking with other members in your area
- Contacting journals directly to inquire if they are seeking new reviewers
- Seeking mentorship from senior colleagues
- Working for senior researchers who may then delegate peer review duties to you”
Or in other words: networking, networking, networking. This aligns with Elsevier’s quote earlier. There is no application process, and it seems rare for journal editors to contact you outright for your expertise based on the work you do. Essentially, reviewers are researchers who seek out reviewing.
So what makes researchers review their peers’ work? Perhaps it’s the compensation they get for evaluating the work of their colleagues? Maybe it’s the recognition for their contributions in validating work done in their field? Or it could be that peer review is just part of being an academic, and therefore, if you want to be part of this community, it just happens to be a task you are expected to perform? Unfortunately, only one of those is correct, and even that may depend on who you ask. Researchers doing peer review are not paid extra for the time they invest in evaluating their colleagues’ work, nor are they even recognized for the feedback (good or bad) that they provide to manuscript authors. No, and I’ve tried to find another possible answer, but it seems that peer review is best described as the academic’s pro bono work.
With this in mind, you can imagine that motivation for reviewing work, work that the majority of people will consider as fact, can be rather minimal. I mean, researchers need to focus on their own work: running their own labs, keeping up to date with their employees, applying for government or institutional funding, and the like. Peer review would certainly be at the bottom of their list of priorities, if a priority at all. Are there hidden incentives that encourage peer review? In a 2018 article published in the International Journal of Surgery Oncology, Koshy and colleagues state that there are several incentives for participating in peer review. The majority of these are exactly as mentioned before: to benefit your discipline, to inform sound government policies, and, my favourite, “Working as a peer reviewer… carries a level of prestige that can be used in one’s credentials”. The problem is that others don’t know whether your reviews are good or bad, not in terms of the overall impression of the work, but in terms of the content of your feedback. The only real incentives that I can see on their list are 1) awareness of the field, 2) training (in critical analysis of research), and potentially 3) peer reviewer recognition and bonuses. The third, while it obviously sounds great, must be uncommon, as it’s generally known that reviewers are not compensated or recognized for the reviews they conduct. Also, reviewers do have deadlines to submit their reviews back to journal editors. A voluntary task with deadlines and no immediate incentives to ensure a decent amount of effort is put into evaluating peer work seems like a system that needs to be adjusted.
Let’s go into what I believe are the 5 main types of peer review. I mentioned this back in my previous episode, but I skipped the less common ones. We’ll start with the most used form, single-blinded peer review, in which reviewers are aware of the authors’ identities but not vice versa. Double-blinded peer review is the case where authors’ and reviewers’ identities are not known to one another. And you bet your butt that there’s triple-blinded peer review as well, in which not only do the authors and reviewers not know each other, but neither does the editor. Now we’re just missing quadruple- and quintuple-blinded peer review, in which the computers and the quantum particles also don’t know who’s who, types I’ve coined dark peer review (when you don’t understand something, just put “dark” in front of the closest thing it resembles!). Okay, there’s also of course open peer review, in which everyone knows everyone; post-publication peer review, in which manuscripts are published (after being screened for their fit with the specific journal) and then reviewed by the scientific community at large; and lastly pre-print peer review, an interesting topic in its own right, where manuscripts that progress to single/double/triple-blinded peer review can be uploaded onto a preprint server and then also reviewed by the scientific community at large. Outside of these 5, there is also collaborative peer review, where reviewers work together to produce one peer review report, and interactive peer review, in which reviewers provide feedback to the authors as they review the work, leading to suggestions and corrections happening in real time, and concluding with a final version of the manuscript that the editor can publish (https://www.elsevier.com/reviewers/what-is-peer-review, https://journals.lww.com/ijsoncology/Fulltext/2018/02000/Peer_review_in_scholarly_publishing_part_A___why.1.aspx).
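To keep the blinded variants straight, the who-knows-whom rules described above can be summarized in a small table, here sketched in code form. This is my own simplification for illustration; real journal policies vary:

```python
# Who knows whose identity under each review type, per the descriptions above.
# A simplifying summary, not any journal's official taxonomy.
ANONYMITY = {
    # type: (reviewers know authors, authors know reviewers, editor knows both)
    "single-blind": (True,  False, True),
    "double-blind": (False, False, True),
    "triple-blind": (False, False, False),
    "open":         (True,  True,  True),
}

for kind, (rev_knows, auth_knows, ed_knows) in ANONYMITY.items():
    blinded = sum(not flag for flag in (rev_knows, auth_knows, ed_knows))
    print(f"{kind}: {blinded} party/parties kept in the dark")
```

Reading it this way makes clear that each step from single- to triple-blind removes one more party's knowledge, while open review removes the blinding entirely.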
All forms of peer review can take a month or multiple months to complete.
Again, single-blinded peer review is the most common type. The anonymity for reviewers has been stated as very important, as it “allows the reviewer to be honest without fear of criticism from an author”. This is particularly important for younger researchers who want to establish a name for themselves in their fields, and don’t want any blowback from scientific juggernauts with lots of influence. You can already see why that is a problem. Why should fellow researchers have to think twice about their opinions of someone else’s work? That doesn’t illustrate the friendly scientific community the public thinks exists, right? Is the scientific community open and friendly? The best answer I can give, based on my own experience as a male from a first-world country: yes at an individual level, no at an institutional level. The publishing giant Wiley even acknowledges this in its own critique of the single-blinded peer review system, in which “Knowledge of the author may overshadow the quality of the work - potentially leading to a lack of scrutiny” and “There is the potential for discrimination based on gender or nationality.” In addition, single-blinded peer review might incentivize reviewers to stall, especially if the work in question is similar to their own. Honestly, wouldn’t you want your work to be remembered over some competing lab’s? Should scientists even be competing with each other? That is literally the case right now; Google “being scooped in academia” for more on this.
What about double-blinded peer review? Surely this would be a better system. In fact, the 2008 large-scale international survey done by the PRC highlighted that 56% of respondents preferred this type of system. Both Wiley and its competitor Elsevier state that this type should reduce bias towards authors. However, it’s often noted that despite the anonymity, reviewers can often discover the identity of the authors (or at least which lab the work came from) based on the writing and self-citation, which is frequent since work is often branched off other work done in the same lab. This defeats the purpose of double-blinded peer review, and wastes the extra administrative work required to run it. Others have argued that “knowledge of the author's identity helps the reviewer come to a more informed judgement - and that without this the review suffers”. This potentially has merit, since if double-blinded peer review is done properly, work done previously by the authors may be hidden from reviewers, work that might help reviewers better understand where the field stands.
I won’t go too much into triple-blinded peer review, as it is quite rare. It is argued that this may in fact be the only way to reduce as much bias as possible. But it seems like overkill. The complexity of conducting a thorough review of this kind would be slow and counterproductive. As well, it doesn’t solve the problems already seen in double-blinded peer review with the loss of author anonymity.
We arrive at the more recently developed systems. Open peer review is one of the newest but is picking up speed in the academic publishing community. In fact, a minority of journals may also publish the reviews along with the paper, ensuring that all parts of the process are transparent. In addition to transparency, these systems encourage well-thought-out feedback, possibly increasing the quality of the work in question. Despite these benefits, reviewers may hold back negative comments for fear of reprisals from the authors, particularly if there is an imbalance of power or influence. Reviewers have also been known to refuse open peer review requests altogether if their identity has to be known, for this same reason. Obviously, this should not be a problem in the scientific community, but it is. Otherwise this would certainly be the best way to ensure honesty and full cooperation in disseminating high-quality research.
Perhaps post-publication is the way to go. In an episode of his podcast, The Portal, outspoken physicist and managing director of the investment firm Thiel Capital Dr. Eric Weinstein calls peer review “a cancer from outer space. It came from the biomedical community, it invaded science… People who are now professional scientists have an idea that peer review has always been in our literature, and it absolutely, motherfucking has not” (www.youtube.com/watch?v=U5sRYsMjiAQ). This great quote might make him seem like a peer review abolitionist; however, what he desires is the post-publication system, where authors pass the traditional “peer injunction” and the real peer review begins once the work has been published in a journal, after editor read-through and scrutiny. True peer review can then occur, since the entire community is open to providing feedback. Now, there are obvious problems with this system as well. This is a lot of power for one or a select number of associate editors at a journal to have. We can imagine that this system, if adopted by the highest-level journals, would change the number of papers published, either increasing or decreasing it. Increasing it would certainly mean more low-quality work being disseminated and perhaps misinterpreted by mainstream media, the general public or even other researchers. Decreasing it would mean that a lot of researchers, often graduate students or postdocs, would have even more trouble adding accepted publications to their resumes, impacting their career opportunities and government funding, since the current standard for scientific success is the amount of scientific literature you produce.
In another review published this year in the British Journal for the Philosophy of Science, authors Heesen and Bright call for the embrace of post-publication peer review as well, suggesting that journals could still publish relevant work and have their exclusive editions contain only the work that passes the scrutiny of the scientific community at large (https://academic.oup.com/bjps/advance-article/doi/10.1093/bjps/axz029/5526887).
It’s a conundrum for sure. Pre-print peer review is another way of combining this method with the general system of blinded reviews already in place, but it doesn’t answer the overall question of at what point scientific work should be shared with the community. Should we keep the current system and simply be aware of the biases and flaws inherent in it? We already know that biases based on nationality and sex occur, and that the system has slow turnover and demands a lot of effort. Should all work be shared, with more work retracted as a result of post-publication open review? We already know that, beyond the potential for low-quality work to slip through the cracks and be misinterpreted, researchers require publications as a metric to succeed in academia. It’s complicated, right?!
The fact of the matter is, we need expert opinions to weigh in on the conclusions derived from scientific research, and on how those conclusions were formed. Perhaps the current system is flawed but is the best the scientific community has. In my opinion, and I don’t think I can fully get into it in this episode, there are two major problems. The first is the reluctance of researchers to be fully honest in their reviews if their identities become known, fearing backlash from authors and perhaps even other researchers who disagree with their opinions. This, regardless of the system, should never occur if the human race is to effectively progress our knowledge of how the universe works. The second problem is the idea that scientific achievement is measured by the number of your publications that pass peer review. Now wait, I understand that this makes a lot of sense, as it means that you are producing good-quality work in an efficient manner, so obviously you’re doing something right. But if peer review, in its current form, can be biased by race, sex, even previous work, and requires a lot of voluntary time from experts to properly review the material, should it be the main factor in government funding for your lab? Should experts even be expected to conduct thorough reviews of others’ work when their own incentives push them to prioritize publishing their own? These are all fair questions that none of these systems have an answer for.
I didn’t have much time to go through the new journals and technologies trying to revolutionize the publishing space, like F1000Research, PeerJ and Rubriq, though links to learn more about them are in the description (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4975196/). The important takeaway here is to be aware of what peer review actually means. These various validation systems are the processes that scientific literature goes through prior to being published. And perhaps by discussing it in detail through science communication, like the annual ASAPBio/HHMI meeting for academic publishers, or even on this small podcast, we can create a peer review system that incentivizes thorough, honest evaluation and reduces as much bias as possible. This would not only demonstrate the inclusivity that science strives for, but also bring our collective knowledge that much closer to the universal truths governing us all.
For this episode, I would like to try something a bit different. What I want is for this podcast to slowly grow into a platform for academics to come on and give their opinion about communicating science, whether you’re an established researcher or graduate student. In particular, I think trainees are usually hidden from the public eye, yet they do a substantial amount of work to get scientific work available to other researchers and the public via publishing means. With that said, I will be doing my first interview today with someone who I believe is doing great work in the SciComm community.
She is a recent graduate of the University of Alberta, having finished her Master of Science in Chemical Engineering with a focus on bacterial viruses. Before this, she completed her Bachelor of Science in Cell Biology. She now works for Genome Alberta, previously worked briefly in recruitment for AbCellera, and has spent the past few years working and volunteering in the realm of science communication and outreach. Presently, she writes for the Science Writers and Communicators of Canada blog and sits as the Steering Committee Chair for Science Slam Canada. She has been a mentor and long-time volunteer for WISEST and recently founded the female-focused networking group SOAR. Please welcome the wonderfully talented Miranda Stahn.
Thanks for listening to this special mid-season episode of… Metaphorigins. Remember to rate and subscribe for more episodes and to follow the podcast on Instagram for updates on the next draw coming on the 20th episode. But until then, stay skeptical but curious.