How you can use your data to improve your marketing efforts. Photo via Getty Images

For business-to-business companies focused on revenue growth, analyzing data to develop and optimize strategies is one of the biggest factors in sales and marketing success. However, the process of evaluating B2B data differs significantly from that of B2C, or business-to-consumer, companies. B2C analysis is often straightforward, focusing on consumer behavior and e-commerce transactions.

Unlike B2C, where customers can make a quick purchase decision with a simple click, the B2B customer journey involves multiple touchpoints and extensive research. B2B buyers will most likely discover a company through an ad or a referral, then navigate through websites, interact with salespeople, and explore different resources before finally making a purchasing decision, often with a committee giving input.

Because a B2B customer journey through the sales pipeline is more indirect, these businesses need to take a more nuanced approach to acquiring and making sense of data.

The expectations of B2B vs. B2C 

It can be tempting to apply the same methods of analysis to B2C and B2B data. However, B2B decision-making requires more consideration. Decisions involving enterprise software or other significant investments in business products or services are very different from a typical consumer purchase.

B2C marketing emphasizes metrics like conversion rates, click-through rates, and immediate sales. In contrast, B2B marketing success also includes metrics like lead quality, customer lifetime value, and ROI. Understanding the differences helps prevent unrealistic expectations and misinterpretations of data.

Data differences with B2B 

While B2C data analysis often revolves around website analytics and foot traffic in brick-and-mortar stores, B2B data analysis involves multiple sources. Referrals play a vital role in B2B, as buyers often seek recommendations from industry peers or companies similar to theirs.

Data segmentation in B2B focuses on job title and job function rather than demographic data. Targeting different audiences within the same company based on their roles — and highlighting specific aspects of products or services that resonate with those different decision-makers — can significantly impact a purchase decision.
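
As a concrete illustration, a revenue team might bucket contacts at the same account by job function before tailoring its messaging. The Python sketch below is a minimal, hypothetical example of that kind of role-based segmentation; the keyword lists and field names are assumptions, not drawn from any particular CRM.

    # A minimal sketch of role-based segmentation. The persona buckets,
    # keywords, and contact fields are hypothetical placeholders.
    PERSONA_KEYWORDS = {
        "finance": ["cfo", "controller", "finance"],
        "technical": ["cto", "engineer", "architect"],
        "end_user": ["manager", "analyst", "operations"],
    }

    def classify_contact(job_title: str) -> str:
        """Assign a contact to a persona bucket based on job-title keywords."""
        title = job_title.lower()
        for persona, keywords in PERSONA_KEYWORDS.items():
            if any(keyword in title for keyword in keywords):
                return persona
        return "unclassified"

    contacts = [
        {"name": "A. Rivera", "job_title": "CFO", "company": "Acme Corp"},
        {"name": "B. Chen", "job_title": "Operations Manager", "company": "Acme Corp"},
    ]

    for contact in contacts:
        print(contact["name"], "->", classify_contact(contact["job_title"]))

Each persona can then receive messaging about the aspect of the product it cares about, such as total cost of ownership for finance contacts.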

The B2B sales cycle is longer because purchases typically involve the input of a salesperson to help buyers with education and comparison. This longer cycle allows teams to implement account-based marketing and creates more engagement, which increases the chances of moving prospects down the sales funnel.

Enhancing data capture in B2B analysis 

Many middle-market companies rely heavily on individual knowledge and experience rather than formal data management systems. As the sales and marketing landscape has become more digital, businesses must evolve with it. Sales professionals can leave, and a company must retain their knowledge of buyers and potential buyers. CRM systems not only collect data; they also preserve the history of customer relationships.

Businesses need to capture data at all the various touchpoints, including lead generation, prospect qualification, customer interactions, and order fulfillment. Regular analysis helps keep that data accurate. The key is to derive actionable insights from it.

B2B data integration challenges 

Integrating various data sources used to be one of the most difficult parts of B2B data analysis. With the advent of business intelligence software such as Tableau and Power BI, analysis is far more accessible and requires a less significant investment, although businesses still need the resources and skills to use the tools effectively.

CRM and ERP systems store a wealth of data, including contact details, interactions, and purchase history. Marketing automation platforms capture additional information from website forms, social media, and email campaigns. Because of these multiple sources, connecting data points and cleansing the data is a necessary step in the process.
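
As a rough illustration of that cleansing step, the Python sketch below joins a hypothetical CRM export with a hypothetical marketing-automation export after normalizing the shared key; the file and column names are placeholders, not any real system's schema.

    # A minimal sketch of connecting and cleansing two data sources with pandas.
    # File names and column names are hypothetical placeholders.
    import pandas as pd

    crm = pd.read_csv("crm_contacts.csv")          # contact details, purchase history
    marketing = pd.read_csv("campaign_leads.csv")  # form fills, email engagement

    # Normalize the join key so "Jane@Example.com " matches "jane@example.com".
    for frame in (crm, marketing):
        frame["email"] = frame["email"].str.strip().str.lower()

    # Drop exact duplicates within each source before joining.
    crm = crm.drop_duplicates(subset="email")
    marketing = marketing.drop_duplicates(subset="email")

    # Attach marketing touches to each CRM contact record.
    unified = crm.merge(marketing, on="email", how="left", suffixes=("_crm", "_mkt"))
    print(unified.head())

The same idea extends to ERP records and other sources: agree on a key, clean it on every side, then merge.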

When analyzing B2B data for account-based marketing (ABM) purposes, there are some unique considerations to keep in mind. Industries like healthcare and financial services, for instance, have specific regulations that dictate how a business can use customer data.

Leveraging B2B data analysis for growth 

B2B data analysis is the foundation for any sales and marketing strategy. Collecting and using data from multiple sources allows revenue teams to uncover gaps, trends, and opportunities for continued growth.

Acknowledging what’s different about B2B data and tracking all of the customer journey touchpoints is important as a business identifies a target market, develops an ideal customer profile, and monitors its competitors. Insights from data can also pinpoint gaps in the sales pipeline, feed predictive analytics for demand forecasting, and inform pricing strategies.

This comprehensive approach gives B2B companies the tools they need to make informed decisions, accelerate their sales and marketing efforts, and achieve long-term growth in a competitive market.

------

Libby Covington is a Partner with Craig Group, a technology-enabled sales and marketing advisory firm specializing in revenue growth for middle-market, private-equity-backed portfolio companies.

Every situation is unique and deserves a one-of-a-kind data management plan, not a one-size-fits-all solution. Graphic by Miguel Tovar/University of Houston

Houston research: Why you need a data management plan

Houston voices

Why do you need a data management plan? It mitigates error, increases research integrity and allows your research to be replicated – despite the “replication crisis” that the research enterprise has been wrestling with for some time.

Error

There are many horror stories of researchers losing their data. You can simply lose your laptop or an external hard drive. Sometimes devices are confiscated if you are traveling to another country — and you may not get them back. Other errors are more nuanced. For instance, a COVID-19 repository of contact-traced individuals was missing 16,000 results because the file exceeded Excel’s row limit.
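
To make that last pitfall concrete, a quick row count before anyone opens a results file in a spreadsheet will flag data that would be silently truncated. The Python sketch below assumes a hypothetical CSV file; the limits are Excel's documented row caps.

    # A minimal pre-import safety check. The file name is hypothetical.
    XLSX_ROW_LIMIT = 1_048_576  # modern .xlsx cap; legacy .xls caps at 65,536

    with open("contact_tracing_results.csv", encoding="utf-8") as f:
        row_count = sum(1 for _ in f)

    if row_count > XLSX_ROW_LIMIT:
        print(f"{row_count} rows: too large for Excel; use a database or CSV tooling")
    else:
        print(f"{row_count} rows: within Excel's limit")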

Do you think a hard drive is the best repository? Keep in mind that 20 percent of hard drives fail within the first four years. Some researchers merely email their data back and forth and feel like it is “secure” in their inbox.

The margins for both human and machine error are wide. Continually backing up your results, while good practice, can’t ensure that you won’t lose invaluable research material.

Repositories

According to Reid Boehm, Ph.D., Research Data Management Librarian at the University of Houston Libraries, your best bet is to utilize research data repositories. “The systems and the administrators are focused on file integrity and preservation actions to mitigate loss and they often employ specific metadata fields and documentation with the content,” Boehm says of the repositories. “They usually provide a digital object identifier or other unique ID for a persistent record and access point to these data. It’s just so much less time and worry.”

Integrity

Losing data or being hacked can challenge data integrity. Data breaches not only compromise research integrity, they can also be extremely expensive. According to Security Intelligence, the global average cost of a data breach in a 2019 study was $3.92 million, a 1.5 percent increase from the previous year’s study.

Sample size — how large or small a study was — is another example of how data integrity can affect a study. Retraction Watch logs approximately 1,500 articles retracted annually from prestigious journals for “sloppy science.” One of the main reasons papers end up being retracted is that the sample size was too small to be a representative group.

Replication

Another measure of data integrity is whether or not an experiment can be replicated. The ability to recreate an experiment is paramount to the scientific enterprise. In a Nature article entitled “1,500 scientists lift the lid on reproducibility,” the journal reported that “73 percent said that they think that at least half of the papers can be trusted, with physicists and chemists generally showing the most confidence.”

However, according to Kelsey Piper at Vox, “an attempt to replicate studies from top journals Nature and Science found that 13 of the 21 results looked at could be reproduced.”

That's so meta

The archivist Jason Scott said, “Metadata is a love note to the future.” Learning how to keep data about data is a critical part of reproducing an experiment.

“While this will always be determined by a combination of project specifics and disciplinary considerations, descriptive metadata should include as much information about the process as possible,” said Boehm. Details of workflows, any standard operating procedures and parameters of measurement, clear definitions of variables, code and software specifications and versions, and many other signifiers ensure the data will be of use to colleagues in the future.

In other words, making data accessible, usable and reproducible is of the utmost importance. You make reproducing experiments that much easier if you do a good job of capturing metadata in a consistent way.
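
As a minimal sketch of what consistent capture can look like, the Python snippet below writes a small descriptive-metadata record alongside a dataset. The field names are illustrative assumptions in the spirit of Boehm's list, not a formal metadata standard.

    # Write an illustrative descriptive-metadata record next to the data.
    # Field names and values are hypothetical examples, not a formal standard.
    import json
    import platform

    metadata = {
        "title": "Example experiment, run 12",
        "workflow": "samples prepared per lab SOP, measured in triplicate",
        "variables": {
            "temperature_c": "sample temperature in degrees Celsius",
            "absorbance_450nm": "optical absorbance at 450 nm, unitless",
        },
        "software": {"python": platform.python_version()},
        "contact": "principal investigator email goes here",
    }

    with open("dataset_metadata.json", "w", encoding="utf-8") as f:
        json.dump(metadata, f, indent=2)

A record like this travels with the data into a repository, so a colleague (or your future self) can interpret the files without guessing.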

The Big Idea

A data management plan includes storage, curation, archiving and dissemination of research data. Your university’s digital librarian is an invaluable resource. They can answer other tricky questions as well, such as: who does data belong to? And when a postdoctoral researcher in your lab leaves the institution, can they take their data with them? Every situation is unique and deserves a one-of-a-kind data management plan, not a one-size-fits-all solution.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

Here's your university research data management checklist. Graphic by Miguel Tovar/University of Houston

Tips for optimizing data management in research, from a UH expert

Houston voices

A data management plan is invaluable to researchers and to their universities. "You should plan at the outset for managing output long-term," said Reid Boehm, research data management librarian at University of Houston Libraries.

At the University of Houston, research data generated while individuals are pursuing research studies as faculty, staff or students are to be retained by the institution for three years after submission of the final report. That means there is a lot of data to be managed. But researchers are in luck – there are many resources to help navigate these issues.

Take inventory

Is your data:

  • Active (constantly changing) or Inactive (static)
  • Open (public) or Proprietary (for monetary gain)
  • Non-identifiable (no human subjects) or Sensitive (containing personal information)
  • Preservable (to save long term) or To discard in 3 years (not for keeping)
  • Shareable (ready for reuse) or Private (not able to be shared)

The more you understand the kind of data you are generating, the easier this step, and the next steps, will be.
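
One lightweight way to hold on to those answers is a structured record per dataset. The Python sketch below is a hypothetical encoding of the inventory above; the fields mirror the checklist and the example values are made up.

    # A minimal per-dataset inventory record mirroring the checklist above.
    # Example values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class DatasetInventory:
        name: str
        active: bool          # constantly changing vs. static
        open_access: bool     # public vs. proprietary
        identifiable: bool    # contains personal information
        preserve: bool        # keep long term vs. discard in 3 years
        shareable: bool       # ready for reuse vs. private

    survey = DatasetInventory(
        name="2024 survey responses",
        active=False,
        open_access=False,
        identifiable=True,
        preserve=True,
        shareable=False,
    )
    print(survey)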

Check first

When you are ready to write your plan, the first thing to determine is whether your funders or your university have data management plan policies and guidelines. The University of Houston, for instance, does.

It is also important to distinguish between types of planning documents. For example:

A Data Management Plan (DMP) is a comprehensive, formal document that describes how you will handle your data during the course of your research and at the conclusion of your study or project.

In some instances, funders or institutions may require a more targeted plan, such as a Data Sharing Plan (DSP), which describes how you plan to disseminate your data at the conclusion of a research project.

Consistent questions that DMPs ask include:

  • What is generated?
  • How is it securely handled? and
  • How is it maintained and accessed long-term?

However it's worded, data is critical to every scientific study.

Pre-proposal

Pre-proposal planning resources and support at UH Libraries include a consultation with Boehm. "Each situation is unique and in my role I function as an advocate for researchers to talk through the contextual details, in connection with funder and institutional requirements," stated Boehm. "There are a lot of aspects of data management and dissemination that can be made less complex and more functional long term with a bit of focused planning at the beginning."

When you get started writing, visit the Data Management Plan Tool. This platform helps by providing agency-specific templates and guidance, working with your institutional login and allowing you to submit plans for feedback.

Post-project

Post-project resources and support involve archiving, curating and sharing information. The UH Data Repository archives, preserves and helps to disseminate your data. The repository, the data portion of the institutional repository Cougar ROAR, is open access and free to all UH researchers; it provides data sets with a digital object identifier and allows up to 10 GB per project. Most federal funding agencies (NSF, NASA, USGS and EPA) already require this type of documentation, and the NIH will require DMPs by 2023.

Start out strong

Remember, although documentation is due at the beginning of a project/grant proposal, sustained adherence to the plan and related policies is a necessity. We may be distanced socially, but our need to come together around research integrity remains constant. Starting early, getting connected to resources, and sharing as you can through avenues like the data repository are ways to strengthen ourselves and our work.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.


Greentown Labs names Lawson Gow as its new Houston leader

head of hou

Greentown Labs has named Lawson Gow as its Head of Houston.

Gow is the founder of The Cannon, a coworking space with seven locations and additional partner spaces in the Houston area. He also recently served as managing partner at Houston-based investment and advisory firm Helium Capital. Gow is the son of David Gow, founder of Energy Capital's parent company, Gow Media.

According to Greentown, Gow will "enhance the founder experience, cultivate strategic partnerships, and accelerate climatetech solutions" in his new role.

“I couldn’t be more excited to join Greentown at this critical moment for the energy transition,” Gow said in a news release. “Greentown has a fantastic track record of supporting entrepreneurs in Houston, Boston, and beyond, and I am eager to keep advancing our mission in the energy transition capital of the world.”

Gow has also held analyst, strategy and advisory roles since graduating from Rice University.

“We are thrilled to welcome Lawson to our leadership team,” Georgina Campbell Flatter, CEO of Greentown Labs, added in the release. “Lawson has spent his career building community and championing entrepreneurs, and we look forward to him deepening Greentown’s support of climate and energy startups as our Head of Houston.”

Gow is the latest in a series of new hires at Greentown Labs following a leadership shakeup.

Flatter was named the organization's new CEO in February, replacing interim CEO Kevin Dutt. Dutt had replaced Kevin Knobloch, who announced that he would step down in July 2024 after less than a year in the role.

Greentown also named Naheed Malik its new CFO in January.

Previously, Timmeko Moore Love was Greentown Labs' first Houston general manager and senior vice president. According to LinkedIn, she left the role in January.

------

This article originally appeared on our sister site, EnergyCapitalHTX.com.

Houston foundation grants $27M to support Texas chemistry research

fresh funding

The Houston-based Welch Foundation has doled out $27 million in its latest round of grants for chemical research, equipment and postdoctoral fellowships.

According to a June announcement, $25.5 million was allocated for the foundation's longstanding research grants, which provide $100,000 per year in funding for three years to full-time, regular tenure or tenure-track faculty members in Texas. The foundation made 85 grants to faculty at 16 Texas institutions for 2025, including:

  • Michael I. Jacobs, assistant professor in the chemistry and biochemistry department at Texas State University, who is investigating the structure and thermodynamics of intrinsically disordered proteins, which could "reveal clues about how life began," according to the foundation.
  • Kendra K. Frederick, assistant professor in the biophysics department at The University of Texas Southwestern Medical Center, who is studying a protein linked to Parkinson’s disease.
  • Jennifer S. Brodbelt, professor in chemistry at The University of Texas at Austin, who is testing a theory called full replica symmetry breaking (fullRSB) on glass-like materials, which has implications for complex systems in physics, chemistry and biology.

Additional funding will be allocated to the Welch Postdoctoral Fellows of the Life Sciences Research Foundation. The program provides three-year fellowships to recent PhD graduates to support clinical research careers in Texas. Two fellows from Rice University and Baylor University will receive $100,000 annually for three years.

The Welch Foundation also issued $975,000 through its equipment grant program to 13 institutions to help them develop "richer laboratory experience(s)." The universities provided matching funds of $352,346.

Since 1954, the Welch Foundation has contributed over $1.1 billion for Texas-nurtured advancements in chemistry through research grants, endowed chairs and other chemistry-related ventures. Last year, the foundation granted more than $40.5 million in academic research grants, equipment grants and fellowships.

“Through funding basic chemical research, we are actively investing in the future of humankind,” Adam Kuspa, president of The Welch Foundation, said in the news release. “We are proud to support so many talented researchers across Texas and continue to be inspired by the important work they complete every day.”

New Houston biotech co. developing capsules for hard-to-treat tumors

biotech breakthroughs

Houston company Sentinel BioTherapeutics has made promising headway in cancer immunotherapy for patients who don’t respond positively to more traditional treatments. New biotech venture creation studio RBL LLC (pronounced “rebel”) recently debuted the company at the 2025 American Society of Clinical Oncology (ASCO) Annual Meeting in Chicago.

Rima Chakrabarti is a neurologist by training. Though she says she’s “passionate about treating the brain,” her greatest fervor currently lies in leading Sentinel as its CEO. Sentinel is RBL’s first clinical venture, and Chakrabarti also serves as cofounder and managing partner of the venture studio.

The team sees an opportunity to use cytokine interleukin-2 (IL-2) capsules to fight many solid tumors for which immunotherapy hasn't been effective in the past. “We plan to develop a pipeline of drugs that way,” Chakrabarti says.

This may all sound brand-new, but Sentinel’s research goes back years to the work of Omid Veiseh, director of the Rice Biotechnology Launch Pad (RBLP). Through another, now-defunct company called Avenge Bio, Veiseh and Paul Wotton — also with RBLP and now RBL’s CEO and chairman of Sentinel — invested close to $45 million in capital toward their promising discovery.

From preclinical data on studies in mice, Avenge was able to manufacture its platform focused on ovarian cancer treatments and test it on 14 human patients. “That's essentially opened the door to understanding the clinical efficacy of this drug as well as it's brought this to the attention of the FDA, such that now we're able to continue that conversation,” says Chakrabarti. She emphasizes that Avenge's demise was due not to the science, but to the company's unsuccessful outsourcing to a Massachusetts management team.

“They hadn't analyzed a lot of the data that we got access to upon the acquisition,” explains Chakrabarti. “When we analyzed the data, we saw this dose-dependent immune activation, very specific upregulation of checkpoints on T cells. We came to understand how effective this agent could be as an immune priming agent in a way that Avenge Bio hadn't been developing this drug.”

Chakrabarti says that Sentinel's phase II trials are coming soon. They'll continue their previous work with ovarian cancer, but she also believes the IL-2 capsules will be effective in treating endometrial cancer. There's also potential for people with other cancers located in the peritoneal cavity, such as colorectal cancer, gastrointestinal cancer and even primary peritoneal carcinomatosis.

“We're delivering these capsules into the peritoneal cavity and seeing both the safety as well as the immune activation,” Chakrabarti says. “We're seeing that up-regulation of the checkpoint that I mentioned. We're seeing a strong safety signal. This drug was very well-tolerated by patients where IL-2 has always had a challenge in being a well-tolerated drug.”

When phase II will take place depends on the success of Sentinel's fundraising push. What we do know is that it will be led by Amir Jazaeri at MD Anderson Cancer Center. Part of the goal this summer is also to create an automated cell manufacturing process and to prove that Sentinel can store its product long-term.

“This isn’t just another cell therapy,” Chakrabarti says.

"Sentinel's cytokine factory platform is the breakthrough technology that we believe has the potential to define the next era of cancer treatment," adds Wotton.