As a researcher, what is more important to you than a record of your research and scholarship? A Digital Persistent Identifier, or DPI, distinguishes you and your work from that of your peers. Graphic by Miguel Tovar/University of Houston

Every researcher needs a Digital Persistent Identifier.

As a researcher, what is more important to you than a record of your research and scholarship? A Digital Persistent Identifier, or DPI, distinguishes you and your work from that of your peers – and having one will be mandated for those receiving federal funding. Let’s take a deeper look at why this number is so important. We’ll also compare the different platforms – ORCID, Web of Science, Scopus and Google Scholar – so that you can be sure your publications, presentations, peer reviews and even information about who is citing you are being properly stored and accessed.

ORCID

There are many types of profiles and DPIs that can meet your needs, but there’s no silver bullet. Placing your work on multiple platforms is necessary, according to Andrea Malone, Research Visibility and Impact Coordinator at UH Libraries. She cautions researchers to “be realistic about how many identifiers you can maintain.”

The most popular is ORCID, which stands for Open Researcher and Contributor ID. It’s free to set up, and there is no chance of ending up with multiple ORCID accounts, accidentally or on purpose – the identifier is assigned to you like a Social Security number and follows you, the researcher. This comes in especially handy for researchers with common names.

An identifier is federally mandated for those receiving governmental funds. It is not specified that ORCID must be that identifier. For example, according to Malone: “a Web of Science profile also assigns an identifier, which would also satisfy the mandate.” But most researchers choose ORCID because it’s publicly available with no access restrictions.

While an ORCID number is free for researchers, there is a subscription fee for an institution to be associated with ORCID. Information will not pre-populate in an ORCID profile and it doesn’t track citation counts – it only shows what you put in. There are, however, linking wizards that allow you to connect Web of Science and Scopus to your ORCID account. If you choose this option, publication records will automatically populate in your ORCID profile. It’s up to the researcher to double-check that the information has populated correctly, however.
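If you want to spot-check what has actually landed in your record, ORCID’s public, read-only API makes this straightforward. Below is a minimal sketch (not an official ORCID client) that lists the works visible on a public record; it assumes the third-party requests library, and the ORCID iD shown is a placeholder.

```python
import requests  # third-party HTTP library (pip install requests)

ORCID_ID = "0000-0000-0000-0000"  # placeholder: substitute your own iD

# The public, read-only ORCID API; no authentication needed for public data.
url = f"https://pub.orcid.org/v3.0/{ORCID_ID}/works"
resp = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
resp.raise_for_status()

# Works arrive grouped by external identifier; print year and title of each.
for group in resp.json().get("group", []):
    for summary in group.get("work-summary", []):
        title = summary["title"]["title"]["value"]
        year = (summary.get("publication-date") or {}).get("year") or {}
        print(f'{year.get("value", "????")}  {title}')
```

Running a quick check like this after each publication is one way to confirm that the linking wizards did their job.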

Google Scholar

Google Scholar is a profile, not an identifier, so it does not comply with federal funding requirements. It is free, however, and it pulls from the open web. You can choose to have your list of articles updated automatically, review the updates yourself or manually update your articles at any time. Google Scholar also specifies which articles are open access: a PDF or HTML icon appears on the right-hand side of each citation so you can download the article.

Web of Science vs. Scopus

Scopus is known for covering more journals and a wider range of metrics for evaluating research impact than Web of Science. Different platforms are the go-to for different disciplines – for example, Web of Science is usually associated with the hard sciences, although investigators in the social sciences and humanities also place their work on this platform from time to time. It’s a good idea to check which platforms others in your discipline are using for their profiles.

Staying up-to-date

Of course, DPIs don’t work as intended unless researchers keep their profiles current. That means you need to check your profile after every publication and every time you switch to a new institution. Just as you would update your CV, you must update your ORCID or other DPI profile.

One tactic Malone suggests is setting a schedule, either biweekly or monthly, to check all your profiles. “One thing that’s helpful is that with all of them, you can set up alerts and create an alert as often as you want,” Malone goes on. “At that time, the program will crawl the content within the source and alert you any time any of your publications appear in their database.”

The Big Idea

No one tool can paint a complete picture of all your scholarship. Be strategic and intentional about which platforms you use: consider your audience and the platforms others in your discipline use, and make sure you have an ORCID profile to comply with the federal mandate. But be careful not to sign up for more profiles than you can feasibly maintain and keep current.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.


Just like any workplace, labs can get toxic. Graphic by Miguel Tovar/University of Houston

How to navigate toxic lab workplaces, according to UH research

houston voices

There are many types of toxic bosses. The Micromanager. The Narcissist. The Incompetent Boss. The list goes on. But labs led by toxic PIs not only make for an abysmal workplace – they can actually encourage research misconduct.

According to Charles Wood, author of “When lab leaders take too much control,” there are two types of toxic labs most at risk for this type of behavior: the executive model and the competition model.

Executive model

Wood described the executive approach to lab management as one where the mentor sets expectations for trainees, often with a particular goal in mind. In its negative form, this includes specifying experimental outcomes and instructing trainees on particular experiments to achieve a desired result.

It comes as no surprise that experimenting with the answer already in mind goes against scientific principles. Spiking biological samples, manipulating instruments – all these things have been suspected in labs, according to the U.S. government’s Office of Research Integrity. The first line of defense is having the investigators replicate their experiment while being closely supervised. If the allegations are found to be credible, the consequences of misconduct can include debarment from further federal funding and having data sequestered.

Competition model

The competition model pits graduate students or postdocs against one another. In this case, whoever gets the result first is rewarded, while the others are punished. This makes a perfect breeding ground for misconduct. Imagine if a foreign student’s citizenship status is affected by whether or not they can produce the results their PI wants them to obtain. Of the competition model, Wood said that what students and postdocs learn can be catastrophic: “competition over collaboration and conformity over creativity.” He posits that researchers graduating from the PI’s toxic lab may be influenced to drop out of science completely or go on to run their own labs in a toxic way.

A correlation between mentors and ethical decision-making

Michael D. Mumford et al., in “Environmental influences on ethical decision making: Climate and environmental predictors of research integrity” (published in the journal Ethics & Behavior), found that for first-year doctoral students, “environmental experiences (including professional leadership) exert stronger effects on ethical decision making than the climate of the work group.”

Wood also noted that, regardless of the management style, certain scientists may be more prone to cheating. However, active involvement and openness by the principal investigator can serve as a preventive measure against this.

What can you do about it?

Chris Sowers, in the “Toxic Boss Syndrome: How To Recover and Get Your Mojo Back” episode of his Better Humans podcast, shared how a few toxic bosses affected his job performance, self-esteem and even interpersonal relationships. His first piece of advice is to get out quickly, even if you need to take a pay cut – he says a few thousand dollars are not worth the hit to your mental and physical health.

Vetting your lab’s PI will help enormously. Does the PI have a good track record of being a fair and kind mentor?

“If your principal investigator starts to exhibit toxic behavior, address this with him or her,” said Wood. He goes on to advise that “if you find yourself in a truly toxic environment, seek guidance from a graduate coordinator, assistant dean or other authority figure who oversees the pre- or postdoctoral training programs — and ask for help in finding another mentor.”

The Big Idea

No one has time or energy to dedicate to a toxic workplace. And the costs of manipulating data are far too high: for one, all authors on a paper will be held responsible for the misconduct – not to mention the physical and mental stress a toxic lab will invite into your life.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

"ChatGPT, even with improved filters or as it continues to evolve, will never be able to replace the critical and creative thinking we need in these disciplines.” Graphic byMiguel Tovar/University of Houston

Houston expert: Analyzing the impact of generative AI on research

houston voices

Researchers have to write extremely specific papers that require higher-order thinking — will an intuitive AI program like OpenAI’s ChatGPT be able to imitate the vocabulary, grammar and, most importantly, content that a scientist or researcher would want to publish? And should it be able to?

University of Houston’s Executive Director of the Research Integrity and Oversight (RIO) Office Kirstin Holzschuh puts it this way: “Scientists are out-of-the-box thinkers – which is why they are so important to advancements in so many areas. ChatGPT, even with improved filters or as it continues to evolve, will never be able to replace the critical and creative thinking we need in these disciplines.”

“A toy, not a tool”

The Atlantic published “ChatGPT Is Dumber Than You Think,” with a subtitle advising readers to “Treat it like a toy, not a tool.” The author, Ian Bogost, indulged in the already tired trope of asking ChatGPT to write about “ChatGPT in the style of Ian Bogost.” The unimaginative but overall passable introduction to his article was proof that “any responses it generates are likely to be shallow and lacking in depth and insight.”

Bogost expressed qualms similar to those of Ezra Klein, the podcaster behind “A Skeptical Take on the AI Revolution.” Klein and his guest, NYU psychology and neural science professor Gary Marcus, mostly questioned the reliability and truthfulness of the chatbot. Marcus calls the synthesizing of its databases and the “original” text it produces nothing more than “cut and paste” and “pastiche.” The algorithm used by the program has been likened to auto-completion, as well.

However, practical use cases are increasingly emerging, blurring the line between technological novelty and professional utility. Whether writing working programming code or spitting out a rough draft of an essay, ChatGPT has a formidable array of competencies – even if just how competent it is remains to be seen. As researchers look for efficiencies in their work, ChatGPT and other AI tools will become increasingly appealing as they mature.

Pseudo-science and reproducibility

The Big Idea reached out to experts across the country to determine what might be the most pressing problems and what might be potential successes for research now that ChatGPT is readily accessible.

Holzschuh stated that there are potential uses, but also potential misuses, of ChatGPT in research: “AI’s usefulness in compiling research proposals or manuscripts is currently limited by the strength of its ability to differentiate true science from pseudo-science. From where does the bot pull its conclusions – peer-reviewed journals or internet ‘science’ with no basis in reproducibility?” It’s “likely a combination of both,” she says. Without clear attribution, ChatGPT is problematic as an information source.

Camille Nebeker is the Director of Research Ethics at University of California, San Diego, and a professor who specializes in human research ethics applied to emerging technologies. Nebeker agrees that because there is no way of citing the original sources that the chatbot is trained on, researchers need to be cautious about accepting the results it produces. That said, ChatGPT could help to avoid self-plagiarism, which could be a benefit to researchers. “With any use of technologies in research, whether they be chatbots or social media platforms or wearable sensors, researchers need to be aware of both the benefits and risks.”

Nebeker’s research team at UC San Diego is conducting research to examine the ethical, legal and social implications of digital health research, including studies that are using machine learning and artificial intelligence to advance human health and wellbeing.

Co-authorship

The conventional wisdom in academia is “when in doubt, cite your source.” ChatGPT even provides some language authors can use when acknowledging their use of the tool in their work: “The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.” A short catchall statement like this alone, however, will likely not pass muster.

Even when being as transparent as possible about how AI might be used in the course of research or in development of a manuscript, the question of authorship is still fraught. Holden Thorp, editor-in-chief of Science, told Nature that “we would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism.” Thorp went on to say that a co-author of an experiment must both consent to being a co-author and take responsibility for a study. “It’s really that second part on which the idea of giving an AI tool co-authorship really hits a roadblock,” Thorp said.

Informed consent

On NBC News, Camille Nebeker stated that she was concerned there was no informed consent given by the participants of a study that evaluated the use of ChatGPT to support responses given to people using Koko, a mental health wellness program. ChatGPT wrote responses either in whole or in part to the participants seeking advice. “Informed consent is incredibly important for traditional research,” she said. If the company is not receiving federal money for the research, there isn’t a requirement to obtain informed consent. “[Consent] is a cornerstone of ethical practices, but when you don’t have the requirement to do that, people could be involved in research without their consent, and that may compromise public trust in research.”

Nebeker went on to say that the study information conveyed to a prospective research participant via the informed consent process may be improved with ChatGPT. For instance, complex study information can be a barrier to informed consent and make voluntary participation in research more challenging. Research projects involve high-level vocabulary and comprehension, and informed consent is not valid if the participant can’t understand the risks involved. “There is readability software, but it only rates the grade level of the narrative; it does not rewrite any text for you,” Nebeker said. She believes one could input an informed consent communication into ChatGPT and ask for it to be rewritten at a sixth to eighth grade level (the range Institutional Review Boards prefer).
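To illustrate the gap Nebeker describes, here is a small sketch using the third-party textstat package (one of several readability tools; an assumption on our part, not her specific software) to score consent language against the sixth-to-eighth grade target. The sample sentence is hypothetical, and note that the tool only scores the text – rewriting it is a separate step.

```python
import textstat  # third-party readability package (pip install textstat)

# Hypothetical consent-form sentence, written in typical research jargon.
consent_text = (
    "Participants will undergo randomized allocation to intervention "
    "cohorts and longitudinal assessment of pharmacokinetic endpoints."
)

grade = textstat.flesch_kincaid_grade(consent_text)
print(f"Flesch-Kincaid grade level: {grade:.1f}")

if grade > 8:
    # The software can only flag the problem; simplifying the prose
    # (by hand, or with a language model followed by human review)
    # is a separate step it will not do for you.
    print("Above the sixth-to-eighth grade range many IRBs prefer.")
```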

Can it be used equitably?

Faculty from the Stanford Accelerator for Learning, like Victor Lee, are already strategizing ways for intuitive AI to be used. Says Lee, “We need the use of this technology to be ethical, equitable, and accountable.”

Stanford’s approach will involve scheduling listening sessions and other opportunities to gather expertise directly from educators on how to strike an effective balance between the use of these innovative technologies and the university’s academic mission.

The Big Idea

Perhaps to sum it up best, Holzschuh concluded her take on the matter with this thought: “I believe we must proceed with significant caution in any but the most basic endeavors related to research proposals and manuscripts at this point until bot filters significantly mature.”

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

Understanding the Fly America Act is important for all researchers planning government-funded travel. Graphic by Miguel Tovar/University of Houston

What Houston researchers should know about the Fly America Act

houston voices

Commercial aviation witnessed a transformative shift following World War II. Air travel, initially reserved for military purposes, began to flourish as civilians embraced its convenience. This surge in air travel highlighted the necessity of regulating the industry.

In response, the Federal Aviation Administration, or FAA, emerged from the Federal Aviation Act of 1958, later becoming a component of the Department of Transportation under the Department of Transportation Act in 1967.

The evolution of air travel regulation continued in 1974 with the enactment of the Fly America Act. Designed to safeguard U.S. interests in international air travel funded by the government, this act prioritizes U.S. airline carriers. This initiative serves both to support domestic airlines and promote the U.S. aviation industry on a global stage when passengers travel on federal funds.

What some might not know is this legislation can impact researchers and their organizations.

Importance for researchers

The Fly America Act applies not only to federal government employees but also to their dependents, grantees, and other travelers funded by federal resources. Even foreign researchers visiting the U.S. under federally funded grants must choose U.S. flag air carriers for their travel.

A U.S. flag air carrier should not be confused with a traditional flag carrier – an airline that has historically been government-owned or otherwise closely tied to the identity of a particular country, like British Airways or Aeroméxico. U.S. flag air carriers encompass a wide range of airlines, including smaller entities like Air Wisconsin Airlines and Avelo, a new carrier based in Houston. You can find a comprehensive list of U.S. flag air carriers here.

Navigating exceptions

While the Fly America Act carries strict guidelines, exceptions do exist. Instances where no U.S. flag air carriers serve the destination or where such carriers would extend the trip by over 24 hours warrant special consideration. In these cases, maintaining meticulous records is essential in order to validate the use of non-U.S. airlines. A list of exceptions can be found here.

Open Skies Agreements introduce another facet to the Fly America Act. These agreements between the U.S. Government and other countries enable travelers, including researchers, to use foreign air carriers for government-funded international travel. Several countries, including those in the European Union, Australia, Switzerland, and Japan, maintain Open Skies Agreements. Flights on British Airways are no longer permitted under an Open Skies Agreement due to the United Kingdom’s exit from the European Union. Proper documentation is essential when claiming a Fly America Act exception, even if covered under an Open Skies Agreement. Detailed travel itineraries, internal agency forms, and evidence of a Fly America exception must be included in travel receipts.
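To see how these rules fit together, here is a deliberately simplified sketch of the decision logic described above. It is not legal or policy guidance, the real exception list is longer, and every name in it is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class Itinerary:
    on_us_flag_carrier: bool       # ticketed on a U.S. flag air carrier?
    us_carrier_serves_route: bool  # does any U.S. flag carrier serve the route?
    extra_hours_on_us_carrier: float  # travel time added by using one
    open_skies_carrier: bool       # foreign carrier covered by an Open Skies Agreement?

def fly_america_compliant(trip: Itinerary) -> bool:
    """Simplified check; real determinations require documentation."""
    if trip.on_us_flag_carrier:
        return True
    # The exceptions below still must be documented in travel receipts.
    if not trip.us_carrier_serves_route:
        return True
    if trip.extra_hours_on_us_carrier > 24:
        return True
    if trip.open_skies_carrier:
        return True
    # Cost and convenience are never valid exceptions.
    return False

# Example: a cheaper foreign carrier on a route U.S. carriers serve is not compliant.
print(fly_america_compliant(Itinerary(False, True, 6.0, False)))  # False
```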

The Big Idea

Compliance with the Fly America Act ensures your travel expenses are reimbursable on government grants. It’s important to remember that cost and convenience are not exceptions to the act. A thorough understanding of the Fly America Act’s provisions and exceptions is a must before you book your next flight.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

How this UH pharmacologist tapped into the humanities to improve health outcomes in her study. Graphic by Miguel Tovar/University of Houston

Integrating humanities can improve health outcomes, Houston researcher finds

houston voices

The humanities, encompassing language, literature, arts, and history, are often seen as separate from scientific research in universities.

However, they hold the potential to contribute to improved health outcomes by fostering empathy, understanding of cultural context, and enhancing human connections. Meghana Trivedi, an associate professor of Pharmacy Practice and Translational Research at the University of Houston, has embarked on a research project to explore this transformative potential.

Using media to improve medicine adherence

Trivedi, a pharmacologist focusing on developing new drugs for breast cancer treatment, noticed a common issue among breast cancer patients — non-adherence to medication instructions, particularly among minority patients with low socioeconomic status. This non-adherence increases the risk of recurrence and poses a significant national problem.

Trivedi sought to understand the reasons behind this non-adherence and discovered that traditional educational pamphlets were ineffective in improving medication adherence. Instead, she decided to explore a theater-based approach.

Trivedi’s research involves developing a culturally sensitive educational video for breast cancer patients, written by a local African American playwright in collaboration with the clinical team and input from African American survivors. Additionally, her team is testing the effectiveness of motivational interviewing, where pharmacy students personally engage with patients to address barriers to adherence and suggest solutions. Trivedi believes that these approaches, rooted in the humanities, will demonstrate their impact on health outcomes.

The role of humanities in team science

Recognizing the value of interdisciplinary collaboration, Trivedi incorporated the arts and humanities into her research by partnering with professors from the School of Theatre and Dance and the Valenti School of Communication at UH. This collaboration has highlighted the importance of incorporating arts into STEM fields, leading Trivedi to advocate for STEAM (Science, Technology, Engineering, Arts, and Mathematics).

Trivedi emphasizes that effective team science requires shared interest in the project, transparency, and honesty among team members. “We are a very efficient team working to achieve a common and important goal – to improve treatment outcomes and survival in patients.”

The Big Idea

Trivedi’s research underscores the role of social and cultural factors in medication adherence among specific patient groups. By posing her research question outside of her discipline, Trivedi engaged new collaborators who became invested in — and contributed directly to — positive health outcomes. This interdisciplinary approach, combining the insights of the humanities and the collaborative nature of team science, facilitated the development of novel solutions to enhance medication adherence.

------

This article originally appeared on the University of Houston's The Big Idea. Cory Thaxton, the author of this piece, is the communications coordinator for The Division of Research.

The process of breaking up research is a dangerous one, according to UH’s Big Idea. Graphic by Miguel Tovar/University of Houston

University of Houston: Why breaking up research studies does more harm than good

houston voices

Salami slicing – breaking a paper on a single study into smaller “slices” and publishing them in more than one journal – is broadly discouraged and considered unethical. Why does the practice persist? What do PIs believe are the benefits of doing it?

Two problems

Breaking up research into smaller slices can have serious consequences for scientific integrity. Researchers, especially younger ones, may get used to looking at data in smaller pieces rather than as a whole. This is dangerous from an academic perspective because valuable conclusions that could have been drawn from the full data set are overlooked.

Further, salami slicing of data may do more harm than good to a researcher’s career over time because it significantly reduces their chances of publishing in high impact journals, thereby lessening the weight of their accrued body of work.

One reason salami slicing persists is that there is a veritable avalanche of papers vying for publication – and the number seems to be steadily increasing.

“The academic market became more competitive after the nation’s economic downturn in 2008,” said Rodica Damian, UH associate professor of psychology. “We saw a lot of competition between those with Ph.D.s and those who were conducting postdoc research. Before, you needed a postdoc if you were in Biology, for instance – but you didn’t need one if you had a doctorate in Psychology. That is no longer the case.”

Another reason salami slicing might persist is that advisors may suggest to a graduate student that they write a series of simpler papers as opposed to a more complex paper consisting of multiple measurements. A researcher might get these “single-lens papers” published much more quickly than their multi-faceted counterparts, due to the amount of background research the journal’s editors need to do on the more complicated papers.

How to avoid self-plagiarism

Salami slicing is not necessarily self-plagiarism, but often the practice does feature a large amount of “text overlap,” according to Miguel Roig, Ph.D. on the website of the Office of Research Integrity for the U.S. Department of Health and Human Services. One example Roig gives is as follows:

“Several months ago, for example, we received a manuscript describing a controlled intervention in a birthing center. The authors sent the results on the mothers to us, and the results on the infants to another journal. The two outcomes would have more appropriately been reported together…The important point is that readers need to be made aware that the data being reported were collected in the context of a larger study.”

The Big Idea

An article published by the NIH suggests this rule of thumb: “If the ‘slice’ of the study in question tests a different hypothesis than the larger study, or has a distinct methodology or population being studied, then it is acceptable to publish it separately.”

However, when a colleague is trying to do a meta-analysis, they need to know what your study actually measured. “One thing you can do to avoid salami slicing,” said Damian, “is to pre-register all the projects you’re planning to do from a specific data set. Then ask yourself: do they use different hypotheses, measures, literatures, etc.?”

After all is said and done, are they substantively and methodologically different research papers? If so, they can be sent to separate journals.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.


Report: Houston secures spot on list of top 50 startup cities

by the numbers

A new ranking signals great promise for the growth of Houston’s startup network.

Houston ranks among the world’s top 50 startup cities on a new list from PitchBook, a provider of data and research about capital markets. In fact, Houston comes in at No. 50 in the ranking. But if you dig deeper into the data, Houston comes out on top in one key category.

The city earns a growth score of 63.8 out of 100 — the highest growth score of any U.S. city and the seventh highest growth score in the world. In the growth bucket, Houston sits between Paris (64.4) and Washington, D.C. (61.7).

The PitchBook growth score reflects short-term, midterm, and long-term growth momentum for activity surrounding venture capital deals, exits, and fundraising for the past six years.

PitchBook’s highest growth score (86.5) goes to Hefei, a Chinese manufacturing hub for electric vehicles, solar panels, liquid crystal displays, home appliances, and Lenovo computers.

The overall ranking is based on a scoring system that relies on proprietary PitchBook data about private companies. The system’s growth and development scores are based on data related to deals, exits, fundraising and other factors.

Houston earns a development score of 34.1 out of 100, which puts it in 50th place globally in that regard. This score measures the size and maturity of a city’s startup network.

Topping the overall list is San Francisco, followed by New York City and Beijing. Elsewhere in Texas, Austin appears at No. 16 and Dallas at No. 36.

The ranking “helps founders, operators, and investors assess locations when deciding where to expand or invest,” says PitchBook.

“Network effects matter in venture capital: Investors get more than half of their deals through referrals, according to research led by Harvard professor Paul Gompers,” PitchBook goes on to say. “So it stands to reason that dealmakers should seek these networks out when deciding where to do business.”

4 Houston universities earn top spots for graduate programs in Texas

top schools

Houston's top-tier universities have done it again. U.S. News and World Report has four Houston-area universities among the best grad schools in the state, with some departments landing among the top 100 in the country.

U.S. News publishes its annual national "Best Graduate Schools" rankings, which look at several programs including business, education, engineering, fine arts, health, and many others. For the 2024 report, the publication decided to withhold its rankings for engineering and medical schools. It also changed the methodology for ranking business schools by adding a new "salary indicator" based on a graduate's profession.

U.S. News also added new rankings for doctoral and master's programs in several medical fields for the first time in four years, or even longer in some cases. New specialty program rankings include audiology, occupational therapy, physical therapy, pharmacy, nurse midwifery, speech-language pathology, nurse anesthesia, and social work.

"Depending on the job or field, earning a graduate degree may lead to higher earnings, career advancement and specialized skill development," wrote Sarah Wood, a U.S. News Education reporter. "But with several types of degrees and hundreds of graduate schools, it can be difficult to narrow down the options."

Without further ado, here's how the local schools ranked:

Rice University's Jesse H. Jones Graduate School of Business maintained its position as No. 2 in Texas, but slipped from its former No. 24 spot in the 2023 report to No. 29 overall in the nation in 2024. Its entrepreneurship program tied for No. 8 in the U.S, while its part-time MBA program ranked No. 15 overall.

Houston's University of Texas Health Science Center earned the No. 3 spot in Texas for both its master's and doctorate nursing programs, which rank No. 31 and No. 45 in the nation, respectively. The school ranked No. 25 nationally in the ranking of Best Public Health schools, and No. 36 for its nurse anesthesia program.

Prairie View A&M University's Northwest Houston Center ranked No. 5 in Texas and No. 117 in the nation for its master's nursing program. Its Doctor of Nursing Practice program ranked No. 8 statewide, and No. 139 nationally.

The University of Houston moved up one spot to claim the No. 4 spot in Texas for its graduate education program, and improved by seven spots to claim No. 63 nationally. Its graduate business school also performed better than last year, claiming No. 56 in the nation, according to the report. The University of Houston Law Center is the fifth best in Texas and 68th best in the U.S. Most notably, its health care law program earned top nods as the seventh best in the country.

Among the new specialty program rankings, UH's pharmacy school ranked No. 41 nationally, while the speech-language pathology program earned No. 44 overall. The graduate social work and public affairs programs ranked No. 67 and No. 76, respectively, in the nation.

The full list of best graduate schools can be found on usnews.com.

------

This article originally ran on CultureMap.