Absolutism has no bearing on the scientific process. Graphic by Miguel Tovar/University of Houston

Science, like politics, can elicit polarizing opinions. But with an ever-expanding body of knowledge — and the especially dizzying flurry of findings during the pandemic — is it fair to say that views on science are becoming more extreme?

Measuring the polarization

“A standard way of measuring polarization in the U.S. is asking Democrats and Republicans how warmly they feel toward members of their own group and members of their outgroup on a feeling thermometer from 0 to 100,” said Jessica Gottlieb, professor at the UH Hobby School of Public Affairs. “The difference in ingroup-outgroup warmth is then considered a measure of polarization. This has been measured by the American National Elections Studies systematically over the past several decades, and indeed the level of affective polarization has been increasing in the U.S.”
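Gottlieb’s thermometer-difference measure is simple enough to sketch in code. The following is illustrative only – the function name and the respondent’s scores are made up, not ANES data:

```python
# Sketch of the affective-polarization measure described above: the gap
# between warmth toward one's own party and toward the other party,
# each rated on a 0-100 feeling thermometer.

def affective_polarization(ingroup_warmth: float, outgroup_warmth: float) -> float:
    """Return the ingroup-outgroup warmth gap for one respondent."""
    return ingroup_warmth - outgroup_warmth

# A hypothetical respondent who rates their own party at 85 and the
# opposing party at 20:
print(affective_polarization(85, 20))  # 65 -- a large gap signals strong polarization
```

Aggregated over many survey respondents, a rising average gap is what researchers mean when they say affective polarization is increasing.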

“Absolutism is the culprit.”

In an article in Foreign Affairs entitled, “How Extremism Went Mainstream,” the author notes that “the tools that authorities use to combat extremists become less useful when the line between the fringe and the center starts to blur.”

Science has traditionally been one such tool. However, this extremism in politics, where everything is black and white, has unfortunately made its way into academia. John Lienhard is a professor at the University of Houston and host of “Engines of Our Ingenuity,” a national radio program that has been telling stories of how creativity has shaped our culture since 1988. According to Lienhard, extremism within the scientific enterprise goes by a different name.

“Absolutism is the culprit – the need on the part of so many of us to know The Right Answer. The absolutists in the world will glom onto whatever vehicle suits them – religion, politics, education, and ultimately, science itself,” said Lienhard. In other words, good scientists amend and revise, while “the absolutist finds the honest practice of science hateful,” he says, “because science is a way of life where everything lies open to question.”

A series of approximations

In an article entitled, “If You Say Science Is Right You’re Wrong,” professor Naomi Oreskes introduces this quote by Nobel Prize–winning physicist Steven Weinberg:

“Even though a scientific theory is in a sense a social consensus, it is unlike any other sort of consensus in that it is culture-free and permanent.”

Well, no. Even a modest familiarity with the history of science offers many examples of matters that scientists thought they had resolved, only to discover that they needed to be reconsidered.

Some familiar examples are Earth as the center of the universe, the absolute nature of time and space, the stability of continents and the cause of infectious disease.

Absolutism in science is dangerous. Good scientists know how important it is to ask probing questions. In his book entitled, Science versus Absolutism: Science Approaches Truth by a Series of Approximations, the chemist T. Swann Harding asks the question: “What are scientific laws?” He goes on to answer:

“Most people appear to regard them as singularly exact and unalterable things … to violate them brings swift retribution. They are unchanging and eternal in character. Yet the so-called laws of science are really rules pieced together by man on a basis of much observation and experiment.”

In the past, so much of science was just plain wrong – until another researcher came around and amended the original belief (think Galileo). How are our modern times any different? There are still many situations where scientific thought has needed to be amended. Even as recently as the COVID crisis, researchers were revising their thoughts about the spread and contagiousness of the disease.

Allowing for dissent

In a Scientific American blog post, Matt Nolan writes that “Dissent in Science Is Essential–up to a Point.” In it, he says, “It is the public who pay the price when marginalized science informs policy. History reminds us this is unsafe territory.” Lienhard notes, however, that Einstein set limits on the validity of Newton’s laws just as nuclear fission provided an amendment to the law of conservation of energy. Wherever experimentation is being conducted, there is always a new question to formulate.

Another predicament, referred to as the “file drawer effect,” occurs when a researcher does not get the answer they were expecting and decides not to publish the negative findings. But every answer is meaningful. Sometimes a negative answer – or no answer at all – is an answer.

Dissent, and perhaps a certain measure of disappointment, is a critical part of scientific inquiry.

The Big Idea

Science can be thought of as the best we know to the degree we understand a given problem at a given place and time. Absolutism has no bearing on the scientific process and in some cases actively obscures and colors that understanding. And that’s not black and white at all; that’s about as gray as it gets.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

If there are fewer grant proposals, does that mean innovation has slowed? UH gets to the bottom of the question. Graphic by Miguel Tovar/University of Houston

University of Houston: What a drop in NSF proposals means for the country's rate of innovation

houston voices

A 17 percent drop over the past decade in proposals to the National Science Foundation may be a mixed blessing.

A consistently rising budget – and this is in billions of dollars – is the preferred way of keeping the number of funded proposals ever higher. But a dip in the number of proposals submitted can have a similar effect on the funding rate: with a smaller pool of submissions, a larger share of proposals gets funded.

In an article for Science, author Jeffrey Mervis poses the question: Has there been a decline in grant-worthy ideas? In NSF’s biology directorate, Mervis notes that “demand has tumbled by 50 percent over the decade and the chances of winning a grant have doubled, from 18 percent in 2011 to 36 percent in 2020.” NSF’s leadership suggests two possible reasons for this phenomenon.
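The arithmetic behind those funding rates is worth making explicit. In this sketch, the award count is hypothetical; only the 50 percent drop in demand and the resulting 18-to-36 percent swing come from the article. Holding awards steady while submissions halve doubles the success rate:

```python
# If the number of awards stays roughly flat while submissions fall by
# half, the success rate doubles -- the dynamic Mervis describes.

def success_rate(awards: int, proposals: int) -> float:
    return awards / proposals

awards = 180                           # hypothetical, held constant
proposals_2011 = 1000                  # hypothetical baseline
proposals_2020 = proposals_2011 // 2   # the 50 percent drop in demand

print(round(success_rate(awards, proposals_2011), 2))  # 0.18 -> 18 percent in 2011
print(round(success_rate(awards, proposals_2020), 2))  # 0.36 -> 36 percent in 2020
```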

Eliminating fixed deadlines

“Dear Colleague” letters went out to numerous directorates within the NSF, notifying PIs that fixed deadlines for small projects ($500,000 and less) would be taken out of the equation. For instance, the Directorate for Computer and Information Science and Engineering’s letter stated that deadlines would be eliminated “in order to allow principal investigators (PIs) more flexibility and to better facilitate interdisciplinary research across disciplines.” The letter goes on to state that, freed from fixed deadlines, PIs can think more creatively and collaboratively – without the added stress of a due date.

Wouldn’t less stress mean more applications? This doesn’t seem to be the case. In one instance, according to another article in Science, proposals dropped when the program ceased annual deadlines and replaced them with rolling deadlines.

Reducing stress for grant reviewers

That article goes on to say that these changes alleviate the strain on the grant reviewers without lowering standards. James Olds, assistant director of the Directorate for Biological Sciences, anticipated that the NSF program managers would get somewhat of a break, and that the new policy would relieve university administrators who process the applications from being overwhelmed.

Other factors at play

“It is highly unlikely there was one specific reason for the decrease,” said David Schultz, assistant vice president for Sponsored Projects in the Office of Contracts and Grants at the University of Houston, “but rather multiple factors contributing over time. One potential cause is that many major research institutions are diversifying their funding sources away from NSF and into other federal agencies more aligned with their strategic areas of research interest, such as NIH, DOD, and DOE. The NIH has seen an 11 percent increase in proposals over the same period, from 49,592 in 2011 to 55,038 in 2020.”

Tenure

“Another component is the documented decrease in the number of tenured faculty across the nation. Generally tenured faculty are more research-focused, as their ability to obtain externally funded research is a major criterion for promotion and tenure,” said Schultz. “While this may lead to fewer proposals, it does encourage new tenure track faculty to focus more efforts on the higher likelihood of being awarded an NSF grant.”

The Big Idea

Some people work better and more efficiently when presented with a deadline. Could that be the reason fewer proposals are being turned in? In his article, Mervis deliberates over whether the drop in proposals means that the nation is innovating more slowly than before. But how could that be?

The National Science Board, NSF’s presidentially appointed oversight committee, is trying to get to the bottom of the issue so as to mitigate it. Olds stands by the decision to remove deadlines, pointing out that it should be the strength of the proposal not the threat of a deadline which motivates the research project.

Schultz sees a silver lining. “With fewer proposals being submitted to the NSF, the shift creates an opportunity for smaller, emerging universities to increase their proposal submission and success rates.”

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

Every situation is unique and deserves a one-of-a-kind data management plan, not a one-size-fits-all solution. Graphic by Miguel Tovar/University of Houston

Houston research: Why you need a data management plan

Houston voices

Why do you need a data management plan? It mitigates error, increases research integrity and allows your research to be replicated – despite the “replication crisis” that the research enterprise has been wrestling with for some time.

Error

There are many horror stories of researchers losing their data. You can just plain lose your laptop or an external hard drive. Sometimes devices are confiscated when you travel to another country – and you may not get them back. Other errors are more nuanced. For instance, a COVID-19 repository of contact-traced individuals was missing roughly 16,000 results because the Excel file format in use caps how many rows a spreadsheet can hold.

Do you think a hard drive is the best repository? Keep in mind that 20 percent of hard drives fail within the first four years. Some researchers merely email their data back and forth and feel like it is “secure” in their inbox.

The human and machine error margins are wide. Continually backing up your results, while good practice, can’t ensure that you won’t lose invaluable research material.

Repositories

According to Reid Boehm, Ph.D., Research Data Management Librarian at the University of Houston Libraries, your best bet is to utilize research data repositories. “The systems and the administrators are focused on file integrity and preservation actions to mitigate loss and they often employ specific metadata fields and documentation with the content,” Boehm says of the repositories. “They usually provide a digital object identifier or other unique ID for a persistent record and access point to these data. It’s just so much less time and worry.”

Integrity

Losing data or being hacked can compromise data integrity. And data breaches not only threaten research integrity – they can also be extremely expensive. According to Security Intelligence, the global average cost of a data breach in a 2019 study was $3.92 million, a 1.5 percent increase from the previous year’s study.

Sample size – how large or small a study was – is another factor that can affect data integrity. Retraction Watch reports that roughly 1,500 articles are retracted annually from prestigious journals for “sloppy science.” One of the main reasons papers end up being retracted is that the sample size was too small to be representative of the group studied.

Replication

Another metric for measuring data integrity is whether or not an experiment can be replicated. The ability to recreate an experiment is paramount to the scientific enterprise. In a Nature article entitled “1,500 scientists lift the lid on reproducibility,” the survey found that “73 percent said that they think that at least half of the papers can be trusted, with physicists and chemists generally showing the most confidence.”

However, according to Kelsey Piper at Vox, “an attempt to replicate studies from top journals Nature and Science found that 13 of the 21 results looked at could be reproduced.”

That's so meta

The archivist Jason Scott said, “Metadata is a love note to the future.” Learning how to keep data about data is a critical part of reproducing an experiment.

“While this will be always be determined by a combination of project specifics and disciplinary considerations, descriptive metadata should include as much information about the process as possible,” said Boehm. Details of workflows, any standard operating procedures and parameters of measurement, clear definitions of variables, code and software specifications and versions, and many other signifiers ensure the data will be of use to colleagues in the future.

In other words, making data accessible, useable and reproducible is of the utmost importance. You make reproducing experiments that much easier if you are doing a good job of capturing metadata in a consistent way.
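The kind of descriptive metadata Boehm recommends can live in a small machine-readable record deposited alongside the data. This is a minimal sketch; every field name and value here is illustrative, since repositories and disciplines define their own schemas:

```python
import json

# An illustrative descriptive-metadata record. All field names are
# hypothetical -- real repositories define their own required schemas.
record = {
    "title": "Example survey dataset",
    "variables": {
        "age": "participant age in years",
        "score": "composite well-being score, 0-100",
    },
    "workflow": "cleaned with clean_data.py v1.2 per SOP-07",
    "software": {"python": "3.11", "pandas": "2.1"},
    "doi": None,  # persistent identifier, assigned by the repository on deposit
}

# Serializing to JSON keeps the record portable and human-readable.
print(json.dumps(record, indent=2))
```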

The Big Idea

A data management plan covers the storage, curation, archiving and dissemination of research data. Your university’s digital librarian is an invaluable resource. They can answer other tricky questions as well, such as: Who does the data belong to? And when a post-doctoral researcher in your lab leaves the institution, can they take their data with them? Every situation is unique and deserves a one-of-a-kind data management plan, not a one-size-fits-all solution.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

Let's talk about dark data — what it means and how to navigate it. Graphic by Miguel Tovar/University of Houston

Houston expert: Navigating dark data within research and innovation

houston voices

Is it necessary to share ALL your data? Is transparency a good thing, or does it make researchers “vulnerable,” as author Nathan Schneider suggests in the Chronicle of Higher Education article “Why Researchers Shouldn’t Share All Their Data”?

Dark Data Defined

Dark data is defined as the universe of information an organization collects, processes and stores – oftentimes for compliance reasons. Dark data never makes it into the official publication of a project. According to the Gartner Glossary, “storing and securing data typically incurs more expense (and sometimes greater risk) than value.”

This topic is reminiscent of the file drawer effect, a phenomenon in which a study’s results influence whether or not the study is published. Negative results can be just as important as those that confirm a hypothesis.

Publication bias – the pressure to publish only positive research that supports the PI’s hypothesis – is arguably not good science. In an article in the Indian Journal of Anaesthesia, Priscilla Joys Nagarajan, et al., write: “It is speculated that every significant result in the published world has 19 non-significant counterparts in file drawers.” That’s one definition of dark data.

Total Transparency

But what to do with all your excess information that did not make it to publication, most likely because of various constraints? Should everything, meaning every little tidbit, be readily available to the research community?

Schneider doesn’t think it should be. In his article, he writes that he hides some findings in a paper notebook or behind a password, and he keeps interviews and transcripts offline altogether to protect his sources.

Open-source

Open-source software communities tend to regard total transparency as inherently good. What are the advantages of total transparency? You may make connections between projects that you wouldn’t have otherwise. You can easily reproduce a peer’s experiment. You can even become more meticulous in your note-taking and experimental methods since you know it’s not private information. Similarly, journalists will recognize this thought pattern as the recent, popular call to engage in “open journalism.” Essentially, an author’s entire writing and editing process can be recorded, step by step.

TMI

This trend has led researchers to open-source platforms like Jupyter and GitHub, which record every change that occurs along a project’s timeline. But are unorganized, excessive amounts of unpublishable data really what transparency means? Or do they confuse those looking for meaningful research that is meticulously curated?

The Big Idea

And what about the “vulnerability” claim? Sharing every edit and every new direction taken can open a scientist up to scoffers and even harassment. In industry, radical transparency can extend to publishing salaries, which can feel unfair to underrepresented, marginalized populations.

In Model View Culture, Ellen Marie Dash wrote: “Let’s give safety and consent the absolute highest priority, with openness and transparency prioritized explicitly below those. This means digging deep, properly articulating in detail what problems you are trying to solve with openness and transparency, and handling them individually or in smaller groups.”

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

The University of Houston has tips for doing your due diligence when it comes to avoiding unintentional plagiarism. Graphic by Miguel Tovar/University of Houston

Houston expert: How to avoid unintentional plagiarism in your research work

houston voices

Plagiarism is the use of someone else’s words, ideas, or visuals as if they were your original work. Unintentional plagiarism is plagiarism that results from the disregard for proper scholarly procedures. It’s much easier to commit than one would think, and it has toppled giants in the research enterprise.

From 2007 to 2020, the National Science Foundation made 200 research misconduct findings, 78 percent of which were related to plagiarism. Here are some dos and don’ts that will help you avoid unintentional plagiarism, a potentially career-killing misstep.

The dos and don'ts

Don’t paraphrase without citing

In a Rutgers University Business School study of 63,700 students, 36 percent of undergraduates admitted to “paraphrasing/copying few sentences from Internet source without footnoting it.”

Don’t forget to add the quotation marks

Also, don’t forget to properly cite your sources at the end of the paper, even when you have used in-text or footnote citations to give proper credit to the primary author.

Don’t copy and paste placeholders

You mean to go back and rewrite the pasted text in your own words, but you are liable to forget or run out of time. (More on this later.) If you copy and paste from a previously published paper of your own, it’s not research misconduct, but it is considered bad practice if you don’t cite it. This is called self-plagiarism.

Do make sure your hypothesis or subject is your own

Plagiarism of ideas occurs when a researcher appropriates an idea, such as a theory or conclusion — whole or in part — without giving credit to its originator. Acknowledge all sources!

Peer review is supposed to be confidential, and colleagues put their trust in each other during this process, assuming there will be no theft of ideas. Once the paper is published in a peer-reviewed journal, it should be cited.

Do use direct quotes

But quoted material should not make up more than 10 percent of the entire article.

Failure to use your own “voice” or “tone” can also be considered plagiarism, or at least construed as plagiarizing, depending on how distinctive the original author’s voice is. When you borrow an especially distinctive turn of phrase, use quotation marks and cite – when in doubt, cite.

When paraphrasing, the syntax should be different enough to be considered your own words. This is tricky because you need to understand the primary work in its original language in order to reword it without just moving words around. In other words, no shuffling words!

Do cite facts widely acknowledged to be true (just in case!)

If it’s something that is generally held within your discipline to be true, or it’s a fact that can be easily looked up – like the year a state passed a certain law – there’s no need to cite “Google” or any generic platform, but it’s better to be safe than sorry. Someone reading your work might not have a background in your discipline.

Do run your paper through a plagiarism-detecting tool

Some options are www.turnitin.com and http://www.ithenticate.com.

Sanctions

There are consequences for plagiarizing another’s work. If you’re a faculty member, the sanctions could affect your career. For instance, according to retractionwatch.com, Terry Magnuson, a prominent researcher and university leader, was recently found to have engaged in misconduct: he was accused of, and later admitted to, plagiarizing unintentionally.

In an open letter to his university colleagues, Magnuson wrote a startlingly candid statement: “You cannot write a grant spending 30 minutes writing and then shifting to deal with the daily crises and responsibilities of a senior leadership position in the university, only to get back to the grant when you find another 30 minutes free.”

He goes on to say: “I made a mistake in the course of fleshing out some technical details of the proposed methodology. I used pieces of text from two equipment vendor websites and a publicly available online article. I inserted them into my document as placeholders with the intention of reworking the two areas where the techniques —which are routine work in our lab — were discussed. While switching between tasks and coming back to the proposal, I lost track of my editing and failed to rework the text or cite the sources.” Taking responsibility for this oversight, he resigned.

And that brings us to the Big Idea…

The Big Idea

The one thing that trips up even the most seasoned writers is having enough time to properly cite all of one’s sources. Give yourself a few extra days (weeks?) to finish your paper, and have a peer read it over to flag any questionable facts or quotes that might need to be cited more appropriately.

Funding agencies take plagiarism very seriously. For instance, the NSF provides prevention strategies by implementing a pre-submission process, and is also attempting to make plagiarism detection software available.

You also may want to take advantage of resources in your university’s library or writing center. There are also several tools to help you organize your citations; one called RefWorks will keep track of your sources as you write in-text citations or footnotes.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research. It's based on a workshop given by Penny Maher and Laura Gutierrez, senior research compliance specialists at the University of Houston.

There are a few things to remember about altmetrics when tapping into non-traditional methods of metrics reporting. Graphic by Miguel Tovar/University of Houston

University of Houston: How to navigate 'altmetrics' in your innovative research project

Houston voices

Alternative metrics, or “altmetrics,” refers to the use of non-traditional methods for judging a researcher’s reach and impact.

Being published in a peer-reviewed journal is surely a great feat. It’s the typical way professors get their research out there. But the tools established to measure this output might give a skewed impression of an author’s impact in spheres both academic and social.

Traditional metrics

Web of Science and Scopus are the main platforms researchers rely on for collecting article citations. Web of Science’s indexing goes back to 1900, and Scopus boasts the largest abstract and citation database. The caveat with these repositories is that each gives you a rating based only on the range and breadth of journals it indexes. Different journals are recorded in different tools, so you may not be getting a comprehensive metric from either.

Let’s talk about h index

The h index is probably never going away, although it is always being improved upon.

The h index is a metric that tells the story of how often a researcher is cited: a scholar has an h index of h when h of their papers have each been cited at least h times. For instance, if a scholar published six papers, and all six papers were each cited by at least six other authors, they would have an h index of 6.

This metric doesn’t work out too well for an academic who, say, had one paper that was continuously cited – they would still have an h index of 1. Brené Brown, Ph.D., even with her veritable empire of vulnerability- and shame-related self-help, has an h index of 7, according to Semantic Scholar.
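The calculation itself is short. Here is a minimal sketch matching the definition above; the citation counts are made up for illustration:

```python
# h index: the largest h such that h papers have at least h citations each.

def h_index(citations: list[int]) -> int:
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank   # at least `rank` papers have `rank` or more citations
        else:
            break
    return h

print(h_index([6, 6, 6, 6, 6, 6]))  # 6 -- six papers, each cited six times
print(h_index([1000]))              # 1 -- one blockbuster paper still gives h = 1
```

This is why a single heavily cited paper barely moves the h index, no matter how large its citation count.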

On to altmetrics

When a psychology professor goes on a morning show to discuss self-esteem of young Black women, for instance, she is not helping her h index. Her societal impact is huge, however.

“When I use altmetrics to deliver a professor his or her impact report, I seek out nontraditional sources like social media. For instance, I check how many shares, comments or likes they received for their research. Or maybe their work was reported in the news,” said Andrea Malone, Research Visibility and Impact Coordinator at the University of Houston Libraries.

Altmetrics aim to answer the question of how academia accounts for the numerous other ways scholarly work impacts our society. What about performances done in the humanities, exhibitions, gallery shows or novels published by creative writers?

Alternative metrics are especially important for research done in the humanities and arts but can offer social science and hard science practitioners a better sense of their scope as well. With the constant connections we foster in our lives, the bubble of social media and such, there is a niche for everyone.

The equalizer

For some, Twitter or Facebook is where they like to publish or advertise their data or results.

“When altmetrics are employed, the general public finds out about research, and is able to comment, share and like. They can talk about it on Twitter. The impact of the work is outside of academia,” said Malone. She even checks a database to see if any of the professor’s works have been included in syllabi around the country.

Academia.edu is another social network offering a platform for publishing and searching scholarly content. It has a fee for premium access, whereas Google Scholar is free. Its profile numbers are usually high because it can pick up any public data – even a slide of a PowerPoint.

The Big Idea

At the University of Houston, altmetrics are categorized thusly: articles, books and book chapters, data, posters, slides and videos. While one would think there’s no downside to recording all of the many places academic work ends up, there are a few things to remember about altmetrics:

  1. They lack a standard definition. But this is being worked on currently by the NISO Alternative Assessment Metrics Initiative.
  2. Altmetrics data are not normalized. Tell a story with your metrics, but don’t compare between two unlike sources. YouTube and Twitter will deliver different insights about your research, but they can’t be compared as though they measure the same exact thing.
  3. They are time-dependent. Don’t be discouraged if an older paper doesn’t have much to show as far as altmetrics. The newer the research, the more likely it will have a social media footprint, for example.
  4. They have known tracking issues. Altmetrics work best with items that have a Digital Object Identifier (DOI).

So have an untraditional go of it and enlist help from a librarian or researcher to determine where your research is making the biggest societal impact.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.


These were the most-read guest columns by Houston innovators in 2022

2022 in review

Editor's note: Every week, InnovationMap — Houston's only news source and resource about and for startups — runs one or two guest columns written by tech entrepreneurs, public relations experts, data geniuses, and more. As Houston's innovation ecosystem gets ready for 2023, here are some of this year's top guest contributor pieces — each with pertinent information and advice for startups both at publishing and into the new year. Make sure to click "read more" to continue reading each piece.

Is your New Year's resolution to start contributing? Email natalie@innovationmap.com to learn more.

Houston expert: How to navigate Gen Z's quiet quitting movement at your company

Your perspective on quiet quitting is probably generational, says one Houston expert and startup founder. Photo via Getty Images

This month, the internet has been discussing "quiet quitting," the practice of employees setting hard boundaries about when they work and to what extent they are willing to go beyond the outlined expectations of their jobs.

The conversation around quiet quitting has also been lively at the Ampersand offices. As a training company that is dedicated to training new professionals for employers both big and small, it's critically important for our team to have a good grasp on the relationship employees have with their jobs, and what motivates them to succeed. So we had a long meeting where we discussed what quiet quitting meant to each of us. Read more.

Houston expert shares how small business leaders can encourage PTO use

Retaining employees is no easy feat these days. Encouraging a healthy PTO policy can help avoid burnout. Photo courtesy of Joe Aker

As many small businesses continue to operate in a challenging, fast-paced environment, one thing that has arrived at breakneck speed is midyear, along with the summer months. Theoretically, to ensure work-life balance, most employees should have 50 percent of their PTO remaining to use for summer vacations and during the second half of the year. In reality, that is probably not the case given workers are hesitant to use their PTO, leaving approximately five days of unused PTO on the table during 2020 and 2021.

While the pandemic affected PTO usage the last two years, the labor shortage appears to be a major contributor in 2022, which has led to PTO hoarding and increasing levels of employee burnout. Although these factors can be compounded for small business owners because there are fewer employees to handle daily responsibilities, it is imperative for workers to take PTO, returning recharged with a fresh perspective on the tasks at hand. Read more.

Houston expert: 3 emotional intelligence tips for improving patient-practitioner experience

A Houston expert shares how to improve on communication in the health care setting. Image via Getty Images

After spending hours with healthcare professionals as both a consultant and patient, I know that it takes a special kind of person to take care of others in their most distressing and vulnerable times. That responsibility has been in overdrive because of COVID, causing emotional burnout, which in turn affects patient care. By equipping yourself with emotional intelligence, you can be more resilient for yourself and patients.

Emotional intelligence is keeping your intelligence high when emotions are high.

Health care sets up an environment for a tornado of emotions, and the rules and regulations centered around patient-provider interactions are often complex to navigate. This leaves many on the brink of emotional exhaustion, and for survival’s sake, depersonalization with patients becomes the status quo. Feeling a disconnect with their patients is another added weight, as few get into this industry for just the paycheck – it’s the impact of helping people get healthy and stay healthy that motivates them. I’ve seen it time and time again with people in my life, as well as on my own patient journey as I battled stage 3 cancer. Read more.

Here's what types of technology are going to disrupt the education sector, says this Houston founder

Edtech is expected to continue to make learning more interactive, fun, and inclusive for people around the world. Photo via Pexels

Technology has always steered education in a certain direction, but the COVID-19 pandemic has forced it to shift in an entirely new one.

What started off as basic video lectures has turned into a more hybrid and innovative form of education, enabling student engagement and interactivity like never before. Social media forums allow teachers to give students one-on-one attention, boosting the learning process.

With an edtech boom on the rise, there is a question of what further expansion in educational technology is expected. Here are some technology breakthroughs currently underway in the education sector. Read more.

Houston expert weighs in on marketing from an investor’s perspective

What should Houston startups know about marketing? Photo via Getty Images

Just what do investors want to see from a startup with regards to the company’s marketing? I recently spoke on this topic to a cohort of early-stage technology startup entrepreneurs at Softeq Venture Studio, an accelerator program that helps founders build investable technologies and businesses. Read more.

These elite Houston researchers were named among the most-cited in their fields

MVPs

Nearly 60 scientists and professors from Houston-area universities and institutions, working in fields from ecology to immunology, have been named among the most-cited researchers in the world.

The Clarivate Highly Cited Researchers 2022 list considers a global pool of public academic papers that rank in the top 1 percent of citations for field and publication year in the Web of Science. It then ranks researchers by the number of times their work has been cited, or referenced, by other researchers, which, according to the University of Houston, helps their findings "become more impactful and gain further credibility."

This year 6,938 researchers from 70 different countries were named to this list. About 38 percent of the researchers are based in the U.S.

“Research fuels the race for knowledge and it is important that nations and institutions celebrate the individuals who drive the wheel of innovation. The Highly Cited Researchers list identifies and celebrates exceptional individual researchers who are having a significant impact on the research community as evidenced by the rate at which their work is being cited by their peers," says David Pendlebury, head of research analysis at the Institute for Scientific Information at Clarivate, in a statement. "These individuals are helping to transform human ingenuity into our world’s greatest breakthroughs.”

Harvard University was home to the most researchers, with 233 making the list, far outpacing Stanford University, which had the second-highest total at 126.

Texas universities and institutions had a strong showing, too. The University of Texas at Austin had 31 researchers on the list, tying UT with the University of Minnesota and Peking University in China for the No. 35 spot. MD Anderson had 30 researchers on the list, the most among organizations in Houston, earning it a 38th place ranking, tied with the University of Maryland and University of Michigan.

Below is a list of the Houston-area highly cited researchers and their fields.

From UT MD Anderson Cancer Center

  • Jaffer Ajani (Cross-Field)
  • James P. Allison (Immunology)
  • Jan A. Burger (Clinical Medicine)
  • George Calin (Cross-Field)
  • Jorge Cortes (Clinical Medicine)
  • Courtney DiNardo (Clinical Medicine)
  • John V. Heymach (Clinical Medicine)
  • David Hong (Cross-Field)
  • Gabriel N. Hortobagyi (Cross-Field)
  • Robert R. Jenq (Cross-Field)
  • Hagop M. Kantarjian (Clinical Medicine)
  • Marina Y. Konopleva (Clinical Medicine)
  • Dimitrios P. Kontoyiannis (Cross-Field)
  • Scott E. Kopetz (Clinical Medicine)
  • Alexander J. Lazar (Cross-Field)
  • J. Jack Lee (Cross-Field)
  • Anirban Maitra (Clinical Medicine)
  • Robert Z. Orlowski (Clinical Medicine)
  • Padmanee Sharma (Clinical Medicine and Molecular Biology and Genetics)
  • Anil K. Sood (Cross-Field)
  • Jennifer A. Wargo (Molecular Biology and Genetics)
  • William G. Wierda (Clinical Medicine)

From Baylor College of Medicine

  • Erez Lieberman Aiden (Cross-Field)
  • Nadim J. Ajami (Cross-Field)
  • Christie M. Ballantyne (Clinical Medicine)
  • Malcolm K. Brenner (Cross-Field)
  • Hashem B. El-Serag (Clinical Medicine)
  • Richard Gibbs (Cross-Field)
  • Helen Heslop (Cross-Field)
  • Joseph Jankovic (Cross-Field)
  • Sheldon L. Kaplan (Immunology)
  • Joseph F. Petrosino (Cross-Field)
  • Cliona Rooney (Cross-Field)
  • James Versalovic (Cross-Field)
  • Bing Zhang (Cross-Field)

From Rice University

  • Pulickel M. Ajayan (Materials Science)
  • Pedro J. J. Alvarez (Environment and Ecology)
  • Naomi Halas (Materials Science)
  • Jun Lou (Materials Science)
  • Antonios G. Mikos (Cross-Field)
  • Aditya D. Mohite (Cross-Field)
  • Peter Nordlander (Materials Science)
  • Ramamoorthy Ramesh (Physics)
  • James M. Tour (Materials Science)
  • Robert Vajtai (Materials Science)
  • Haotian Wang (Chemistry)
  • Zhen-Yu Wu (Cross-Field)

From University of Houston

  • Jiming Bao (Cross-Field)
  • Shuo Chen (Cross-Field)
  • Zhifeng Ren (Cross-Field)
  • Zhu Han (Computer Science)

From UTMB Galveston

  • Vineet D. Menachery (Microbiology)
  • Nikos Vasilakis (Cross-Field)
  • Scott C. Weaver (Cross-Field)

From UT Health Science Center-Houston

  • Eric Boerwinkle (Cross-Field)

Overheard: Houston experts call for more open innovation at industry-blending event

eavesdropping at the Ion

Open innovation, or the practice of sourcing new technologies and ideas across institutions and industries, was top of mind at the annual Pumps & Pipes event earlier this week.

The event, which is put on by an organization of the same name every year, focuses on the intersection of the energy, health care, and aerospace industries. The keynote discussion, with panelists representing each industry, covered several topics, including the importance of open innovation.

If you missed the discussion, check out some key moments from the panel.

“If we want to survive as a city, we need to make sure we can work together.”

— Juliana Garaizar of Greentown Labs. "From being competitive, we’ve become collaborative, because the challenges at hand in the world right now are too big to compete," she continues.

“The pace of innovation has changed.”

— Steve Rader of NASA. He explains that 90 percent of all scientists who have ever lived are alive on Earth today. “If you think you can do it all yourself — and just find all the latest technology yourself, you’re kidding yourself.”

“You can’t close the door. If you do, you’re closing the door to potential opportunities.”

— Michelle Stansbury of Houston Methodist. She explains that there's an influx of technologies coming in, but what doesn't work now might work later or for another collaborator. "I would say that health care as a whole hasn’t been very good at sharing all of the things we’ve been creating, but that’s not the case today," she explains.

“The thing that makes Houston great is the same thing that makes open innovation great: diversity.”

— Rader says, adding that this makes for a great opportunity for Houston.

“Some of our greatest innovations that we’ve had come from other industries — not from health tech companies.”

— Stansbury says. "I think that's the piece everyone needs to understand," she says. "Don't just look in your own industry to solve problems."

“Nobody knows what is the best technology — the one that is going to be the new oil."

— Garaizar says. “All of this is going to be a lot of trial and error," she continues. “We don’t have the luxury of time anymore.”