Rights-tech Startups Driving Blockchain Investment

The data-visualization folks at Quid have been crunching numbers on investment in blockchain applications, and they’ve come up with some interesting charts.

The researchers identified 450 venture-backed companies using some form of blockchain technology. And as you would expect, the biggest recipients of VC money to date have been Bitcoin miners, cryptocurrency exchanges, and companies involved in financial services of one sort or another.

But when Quid stripped out the fin-tech firms a very different picture emerged. Four of the top 10 recipients are rights-tech companies, including ascribe, artCOA, Monegraph, and Revelator.

The scale of the investments in non-finance related startups is far smaller than in fin-tech firms, of course. The top rights-tech company on the list, ascribe, has raised about $6 million to date, compared to more than $130 million for 21 Inc., which sells Bitcoin mining computers. But as Quid notes, its analysis “suggests that blockchain has big potential to transform a variety of industries, particularly those that rely heavily on data authentication and verification, including healthcare and digital media.”

 

A Chunk of History: The Medieval Roots of Digital Publishing

This blog post originally appeared in Concurrent Media.

One of the wonderful paradoxes of the digital era of media is its retrograde quality. We tend to think of inventions like the internet and peer-to-peer digital networks as apotheoses of modern communication, but their economic impact on many media industries has been to unravel their modern industrial structures and to resurrect many of their pre-industrial, folk foundations.

Nowhere has that been more true than in the case of music. MP3 files, P2P networks, and now streaming have blown up the multi-song bundle we called the album — and the profit margins that came with it — and restored the single to prominence, as it was in the days before the invention of the long-playing record (LP).

The much-derided phenomenon of unlicensed “sharing” of music over P2P networks also carries echoes of music’s past. Until the Gramophone and the Phonograph made private performances of music practical, music was almost always shared, in the sense that it was usually experienced as part of a public performance. While the industrial technologies of recording and playback made private performances lucrative, the instinct to share music never really went away.

Even modern notions of musical authorship are in part a function of industrial technology and are now being challenged by digital technology. Prior to recording, many forms of folk music (think traditional American blues) held standard lyrical tropes and even entire verses as part of a commons that were recycled and rearranged by performers as needed. It wasn’t until recording technology enabled the fixation of a canonical version of a performance that many folk artists began to think seriously about authorship.

Today, EDM DJs treat recordings as part of a commons, recycling and reassembling their elements into unique performances.

The film and television industries haven’t experienced the same retrograde dislocation as the music industry has, in part because the media themselves are products of industrial technology. Film and TV have no pre-industrial past to resurrect. Even so, they have felt the tug of digital technology against the industrial economics of bundling, as programs are disaggregated from channels and channels are disaggregated from pay-TV tiers.

As a sometime-student of media history, I came across a fascinating recent example of digital technology’s pre-industrial DNA in an interview with David Hetherington, North American COO of Klopotek, ahead of the upcoming London Book Fair.

Klopotek AG is a German software company that provides CMS and rights management technology to the book publishing industry around the world. Here is Hetherington’s description of one way Klopotek helps academic publishers monetize their works, taken from Publishing Perspectives:

“Typically, thinking in the book business,” Hetherington says, “has started with the book. And I think our view of it is that it really has to start with the grain of content…

“So the idea of taking content from various products and pulling it together means that the initial block of content is no longer sold as it is. It means that the content is able to be parsed and re-assembled.

“The users—whoever wants to re-assemble that content—can identify the pieces they want, can specify the part numbers. This, in effect, means that the owner of the content must give it a unique part number and pass that part number to the potential market.

“At times, in some markets, this has been called the “chunking” of a book, breaking it into salable sections that fit users’ needs. Nowhere is this more easily understood than on campuses, where professors can effectively build their own textbooks for courses by piecing together parts of existing works, a “chunk” at a time, to match the needs of a given set of students.

Klopotek’s sophisticated software helps publishers “chunk” their books and license the chunks separately into customized bundles. It creates a licensed alternative to the “course pack,” in which professors would assemble their custom bundles at Kinko’s and then distribute them to students. It also offers a defense against book rentals and used-book sales by giving students an affordable option.
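As a toy illustration of the part-number idea Hetherington describes, a chunk catalog and custom bundle assembly might look like the following (all part numbers, titles, and fees are invented for the example):

```python
# Toy illustration of "chunking": each salable section of a book gets
# a unique part number that a course builder can order by. All part
# numbers, titles, and fees here are invented for the example.
catalog = {
    "ISBN123-CH04": {"title": "Supply and Demand", "pages": 28, "fee": 3.50},
    "ISBN123-CH07": {"title": "Market Failure", "pages": 31, "fee": 3.75},
    "ISBN456-CH02": {"title": "Game Theory Basics", "pages": 24, "fee": 4.00},
}

def build_course_pack(part_numbers):
    """Assemble a licensed custom bundle from chunks of existing works."""
    chunks = [catalog[p] for p in part_numbers]
    return {
        "contents": [c["title"] for c in chunks],
        "total_fee": round(sum(c["fee"] for c in chunks), 2),
    }

pack = build_course_pack(["ISBN123-CH04", "ISBN456-CH02"])
print(pack["total_fee"])  # → 7.5
```

The point is that once chunks carry unique identifiers and prices, assembling and licensing a custom textbook becomes a simple lookup-and-sum rather than a negotiation over whole books.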

Any 14th Century university scholar, however, would immediately recognize Hetherington’s description as an example of the pecia system.

With the rise of Medieval universities in Europe, the demand for books for use by students increased dramatically. But in the years before Gutenberg made it possible to produce identical copies of texts at scale, reproducing books was a laborious, manual process, carried out mostly by monks or itinerant scribes, and incapable of meeting the demand.

A solution emerged in Italy in the 13th Century and spread quickly to other countries. Books were chunked into pieces (pecia) for copying by individual students, and the pieces were then passed around in what amounted to a peer-to-peer network until each student was able to assemble all parts of the text required for his course work (women were not permitted to attend university).

The system became formalized and regulated in the early 14th Century, beginning at the University of Paris. Certain book mongers were licensed to provide students with pecia rentals for copying taken from master texts certified by members of the university faculty. The rates that could be charged for each work were set by the university, and as demand grew and more master copies were needed to supply pecia, the texts were regularly inspected by scholars to make sure they did not become corrupted through the accumulation of copying errors.

The scholars who oversaw the pecia system were not concerned with authorship per se, of course, let alone droit d’auteur. The concept barely existed at the time, and in any case the texts in question were mostly classical or the works of the early Church fathers. The scholars’ interests were pedagogy and preserving the integrity of the texts, not rights management. But it shows that the use of chunking to affect the economics of academic publishing has a long history.

The printing press, many early examples of which were established in university towns, eventually did away with the need for the pecia system by introducing industrial economies of scale to the reproduction of books, although the system survived well into the 16th Century in some areas.

The mechanical press made the complete text the anatomical unit of the commercial publishing industry — “starting with the book,” in Hetherington’s formulation. But it wasn’t always that way, and with digital technology it need not be that way now.

dotBlockchain Music: Data Before Database

The dotBlockchain Music Project (dotBC), an ambitious effort to create an open-source data framework for sound recordings and musical compositions, received a major boost last week with the announcement that four industry partners have signed on to support the initiative: Canadian performing rights organization SOCAN and its rights administration subsidiary MediaNet; publishing royalty administrator Songtrust; independent music distributor CD Baby; and digital rights service FUGA.

The new partners, the first for dotBlockchain, will bring a catalog of more than 65 million recordings into the dotBC ecosystem, and will add another 500,000 new recordings a month, according to the announcement.

According to dotBlockchain co-founder Benji Rogers, the four partners were recruited in part because they represent most of the critical links in the music value chain: PRO, distribution, rights administration, and technology platform. dotBlockchain is also working with music publishers and leading digital service providers on joining the initiative, according to Rogers, but those partners are not yet ready to go public with their participation.

The goal of the dotBlockchain Project is to create a technical framework for permanently binding data on authorship and ownership of musical compositions to individual sound recordings. That package of sound file and ownership information could then serve as the foundation for others in the music value chain to layer on additional metadata related to their involvement in or uses of the work, such as the date of the recording and the identities of the musicians involved, and the date of and artists involved in any subsequent recordings of the same work.

If all goes according to plan, the system would provide an unbroken chain of data from any use of a work, such as streaming a recording of it, back to the original authors and rights owners, and to anyone due money for use (see the video below for a visual representation of how it’s meant to work).
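The binding idea can be sketched in a few lines: hash the sound file, attach the ownership data, and let each later metadata layer commit to everything that came before it. This is illustrative only, not dotBC’s actual wrapper format, and all names are invented:

```python
import hashlib
import json

def bind_ownership(audio_bytes, ownership):
    """Bind ownership metadata to a recording by hashing the audio,
    so later layers can verify which file the claims refer to.
    Illustrative only; not the actual dotBC wrapper format."""
    return {
        "audio_sha256": hashlib.sha256(audio_bytes).hexdigest(),
        "ownership": ownership,
        "layers": [],
    }

def add_layer(package, layer):
    """Append a metadata layer that commits to the hash of everything
    before it, keeping the chain of data tamper-evident."""
    prev = hashlib.sha256(
        json.dumps(package, sort_keys=True).encode()
    ).hexdigest()
    package["layers"].append({"prev_hash": prev, "data": layer})
    return package

pkg = bind_ownership(b"\x00fake-audio-bytes",
                     {"composition": "Example Song", "writers": ["A. Author"]})
pkg = add_layer(pkg, {"event": "recording", "musicians": ["B. Player"]})
```

Because each layer hashes its predecessors, anyone downstream can detect whether the authorship data was altered after the fact, which is the property an unbroken chain of data depends on.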

Getting a real-world catalog of publishing information to work with was key to the next phase in the development of the dotBC ecosystem, Rogers told RightsTech.com.

“The most important thing when you’re trying to bootstrap something like this is you have to have a base level to start from. We needed actual sound recordings to work with,” Rogers told RightsTech.com.

SOCAN and CD Baby will provide the data on those recordings.

“We can now say, this is where the sound recordings are, and here is the publishing information,” Rogers said. “And now, a DSP can have all of that information for every stream.”

Rogers hopes that ground-up approach will allow dotBlockchain to succeed where other efforts to create a comprehensive library of ownership data have failed, such as the now-abandoned Global Repertoire Database initiative.

“Every other proposal for how to do this has been database-first. We felt this had to be publishing-first and then you build out from there,” Rogers said.

Rather than building and hosting its own database, in fact, dotBC will use the public blockchain to register information, eliminating questions about ownership of the data and who would have access to it.

“This will give publishers much better visibility into how their works are being used and will put them on much more equal footing with other rights holders.”

With last week’s announcement the dotBlockchain Project officially entered Phase 2 of its three-part development plan, according to Rogers. Phase 1 included open-sourcing its code base and creating “wrapper” codes for binding ownership information to sound files. Phase 2, which Rogers described as a sort of “sandbox” phase, will let interested parties model real-world examples of what a finished dotBC file would look like and to test the robustness of the data chain. It’s scheduled to run through the third quarter of this year.

“I think by the late summer there will be a fair number of real dotBCs in the world,” Rogers said.

Phase 3, currently scheduled to begin by the end of the year, would involve implementing the system in the wild.

 

Waiting for Zuck

Facebook last week snared long-time music industry attorney Tamara Hrivnak to head up its global music strategy and business development, luring her away from YouTube, where she had served as director of music partnerships.

The hire comes just weeks after reports surfaced that Facebook is moving forward with the development of its Rights Manager tool for identifying infringing content on its platform, and that it had begun preliminary discussions with the major record labels and music publishers about securing licenses to host music on the site. It further raised expectations that the giant social media network is gearing up for a major push into music.


“Music is important and it matters – it connects us and binds us to times, places, feelings and friends,” Hrivnak said in a statement announcing her hire. “My career has been dedicated to growing opportunities for music in the digital landscape.”

That’s music to the ears of rights owners and distributors, who see in Facebook both a serious and growing problem of copyright infringement, but also a potentially major opportunity.

At a RightsTech panel on user-generated content platforms during the Digital Entertainment World conference last week, speakers representing multichannel networks, rights-management providers, and distributors were unanimous in identifying Facebook’s apparent moves into content licensing as the most important development to watch in the UGC space over the next year.

Ever since Facebook moved video front and center in users’ news feeds, it has become a popular destination for music videos, particularly user-created covers and lip-dubs of popular songs. None of that content is currently licensed, however, and so it generates no revenue for the original rights owners.

The site has increasingly come under criticism from rights owners for failing to adequately address the growing amount of infringing content it hosts. Facebook also generates huge numbers of share-driven views of licensed content scraped from YouTube and other outlets without compensation to rights owners.

“With views in the millions, it’s time for Facebook to answer songwriters’ friend request and properly license their platform,” National Music Publishers Association president David Israelite wrote in a Billboard op-ed in October. “Otherwise, it may find itself de-friended by the music industry.”

The reports that Facebook is developing a more robust version of Rights Manager, and the hiring of Hrivnak, were seen as responses, at least in part, to such criticism.

But Israelite’s op-ed also hinted, perhaps unintentionally, at the potential scale of the opportunity Facebook presents to rights owners if currently infringing content can be brought within the purview of licenses.

“[W]hile we don’t know the full extent of what is posted, we do know that engagement and viewership on Facebook often outpaces other social media video platforms,” Israelite wrote. “In fact, a recent study of the popularity of copies of Adele’s music videos examined 60,055 copies of ‘Hello’ and found that while ‘Facebook had only 64 percent of the number of copies published to YouTube, Facebook still garnered over two times more video views than YouTube. On average, Facebook racked up 73,083 views per video, whereas each YouTube copy amassed an average of 23,095 views per video.’”

Currently the most widely used streaming platform for music, YouTube has emerged as the bête noire of music rights owners over what many view as paltry royalty payouts relative to the volume of usage it attracts. Apart from complaining about it, however, rights owners have so far had little practical impact on YouTube’s policies, largely because they have lacked the leverage of an alternative.

Few other platforms have the scale to mount a serious challenge to YouTube. But one of the few that does is Facebook. If Facebook were to strike more artist-friendly deals with the record labels and publishers, the data cited by Israelite suggests it could emerge as a potent counterweight to YouTube — the main reason the DEW panelists cited for their (cautious) optimism.

Speaking to analysts during an earnings call last week, Facebook CEO Mark Zuckerberg offered rights owners still more reason for optimism.

“We’re focusing more on shorter-form content to start,” Zuckerberg said. “There is the type of content that people will produce socially for friends. There’s promotional content that businesses and celebrities and folks will produce. But there’s also a whole class of premium content. The creators need to get paid a good amount in order to support the creation of that content, and we need to be able to support that with a business model, which we’re working on through ads to fund that.”

Much will depend on the development of the revamped Rights Manager. While reportedly modeled on YouTube’s Content ID, few details of how it will work have surfaced to date.

For all of the complaints directed at YouTube over its payouts to artists, Google spent years and (it claims) more than $60 million to develop Content ID, which even YouTube’s critics acknowledge has evolved into a sophisticated and robust piece of engineering.

Facebook certainly has both the financial and engineering resources to throw at the problem. But development still takes time, as will building up the vast library of reference files Content ID uses to match against posted content.
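The reference-library matching flow can be sketched as follows. Real systems like Content ID use robust acoustic fingerprints that survive re-encoding; the raw-byte hashing and made-up track and audio below only illustrate the mechanics:

```python
import hashlib

def fingerprint(samples, chunk=1024):
    """Toy fingerprint: hash fixed-size chunks of the raw audio.
    Real systems like Content ID use acoustic features that survive
    re-encoding; raw-byte hashing only illustrates the matching flow."""
    return {
        hashlib.sha1(samples[i:i + chunk]).hexdigest()
        for i in range(0, len(samples) - chunk + 1, chunk)
    }

# The reference library rights owners supply, keyed by track (invented).
reference_library = {
    "example-track": fingerprint(b"reference-audio-bytes" * 200),
}

def best_match(upload, threshold=0.3):
    """Compare an upload's fingerprint against every reference file."""
    fp = fingerprint(upload)
    for track, ref in reference_library.items():
        overlap = len(fp & ref) / max(len(ref), 1)
        if overlap >= threshold:
            return track, overlap
    return None

print(best_match(b"reference-audio-bytes" * 200))  # → ('example-track', 1.0)
```

The expensive part in practice is not this comparison loop but assembling and maintaining the reference library itself, which is why the build-up takes years even with Facebook-scale resources.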

Without an effective content recognition system even the most favorable license agreements are bound to prove disappointing to rights owners. But if anyone can change the marketplace dynamics of ad-supported streaming it’s Facebook.

Facebook Takes Aim at the ‘Value Gap’

Facebook is developing a system to automatically identify copyrighted works posted to its massive social network similar to YouTube’s Content ID system, according to a report in the Financial Times (here’s Billboard’s rewrite of the paywalled FT story).

Word of the move comes just weeks after an op-ed by National Music Publishers Association head David Israelite appeared in Billboard calling on Facebook to address a growing infringement problem on the network, particularly with respect to user-posted videos featuring cover versions of songs that were never properly licensed from the publisher.

“In a recent snapshot search of 33 of today’s top songs, NMPA identified 887 videos using those songs with over 619 million views, which amounts to an average of nearly 700,000 views per video,” Israelite wrote. “In reality, the scope of the problem is likely much greater because, due to privacy settings on Facebook, it’s almost impossible to gauge the true scale.”

Up to now, Facebook has generally fallen back on the DMCA safe harbor to deal with copyrighted work posted without a license to its platform, removing infringing material when requested by a rights owner but not actively policing copyrighted content uploaded to its platform. According to Billboard, however, Facebook has begun discussions with the major record companies about licensing content directly, although those talks are apparently in the early stages.

Facebook actually rolled out a tool called Rights Manager last year to help rights owners keep their copyrighted works off the network, but that system was mainly designed to address the problem of video “freebooting,” in which Facebook users take videos from YouTube and other sources and post them to their walls, often generating millions of views without compensation to the rights owner.

According to this week’s published reports, the new system is aimed more at policing music use on the platform, and seems driven at least in part by Facebook’s desire to avoid the sort of sustained, naming-and-shaming campaign the music industry has mounted against YouTube over the so-called value gap.

What’s not clear from the published reports is what Facebook has in mind for what to do about the unlicensed content the new system identifies. But here’s hoping it doesn’t follow YouTube’s example too slavishly.

YouTube’s Content ID essentially offers rights owners a binary choice: take the content down, or leave it up and let YouTube run ads against it on terms set by YouTube. What YouTube doesn’t really offer rights owners is a means to effectively engage with users who are viewing or posting the content.

Facebook has an opportunity to offer rights owners a much richer environment to engage with music fans. If someone has gone to the trouble of covering your song and making a video of it, they’re probably a fan. And when they post it publicly on their Facebook wall you know exactly who they are. Even if the user shares the content only with his or her friends, Facebook knows who they are and it knows a lot about who their friends and other connections are.

More important, Facebook has the means to allow artists to engage directly with those fans and potential fans. Such engagement may have limited appeal to songwriters and publishers, but it could prove to be a boon to recording artists and labels by literally putting a face on their fans.

Even for songwriters and publishers, the type and volume of data Facebook’s new system could potentially yield on how, where, and how often their content is being consumed could be valuable.

In short, Facebook has a chance to bridge the value gap by offering rights owners more choices than simply take-down or hand-me-down monetization.

 

Solving Fractions: Rights-Tech Startup Offers an ‘Alternative’ to ASCAP, BMI

The U.S. Department of Justice last month notified the federal Second Circuit Court of Appeals of its intent to appeal a lower court ruling that effectively overturned the department’s controversial decision barring fractional licensing by ASCAP and BMI.

The appeal all but guarantees that the question of whether the performance rights organizations (PROs) must offer so-called 100 percent licenses to all the songs in their catalogs, rather than only the share of the rights held by the publishers and songwriters each PRO purports, respectively, to represent, will continue to hang over the industry well into 2017 and perhaps longer.

“While we hoped the DOJ would accept Judge Stanton’s decision, we are not surprised it chose to file an appeal,” BMI president and CEO Mike O’Neill said in a statement. “It is unfortunate that the DOJ continues to fight for an interpretation of BMI’s consent decree that is at odds with hundreds of thousands of songwriters and composers, the country’s two largest performing rights organizations, numerous publishers and members of the music community, members of Congress, a U.S. Governor, the U.S. Copyright Office and, in Judge Stanton, a federal judge.”

The case arose from a request filed with the Justice Department in 2014 by the PROs themselves, for modifications to the consent decrees that have long governed how ASCAP and BMI grant performance licenses to broadcasters, venues and other outlets that publicly perform live or recorded music. In response to that request — which concerned efforts by some music publishers to withdraw digital performance rights to their catalogs from the blanket licenses issued by the PROs — the DOJ initiated a review of current licensing practices. In the course of that review, the antitrust division unexpectedly broadened its focus to the question of fractional licensing in general, rather than the narrower question of partial withdrawal of digital rights.

In the end, the antitrust division decided not to grant the requested modifications for digital rights. But it concluded that the language and intent of the consent decrees had always required 100 percent licensing and that any current industry practices to the contrary would need to change. It gave publishers and the PROs one year to make whatever changes to their systems were necessary to comply.

“The Division reaches this determination based not only on the language of the consent decrees and its assessment of historical practices, but also because only full-work licensing can yield the substantial procompetitive benefits associated with blanket licenses that distinguish ASCAP’s and BMI’s activities from other agreements among competitors that present serious issues under the antitrust laws,” the department said in a statement issued at the conclusion of its review. “The Division’s confirmation that the consent decrees require full-work licensing is fully consistent with preserving the significant licensing and payment benefits that the PROs have provided music creators and music users for decades.”

The consent decrees at issue in the case date to the 1940s. They were the result of separate lawsuits brought by the Justice Department against ASCAP and BMI, alleging each organization was illegally exercising market power obtained by aggregating performance rights from nominally competing publishers and songwriters in violation of the Sherman Antitrust Act. In the course of that litigation, DOJ concluded that the blanket licenses offered by the PROs brought meaningful efficiencies to the market for performance rights by sparing music users from having to negotiate separately for the rights from tens of thousands of songwriters and publishers. The consent decrees, which settled the cases, were designed to allow music users to benefit from the efficiencies of blanket licensing while putting strict limits on how those licenses could be structured and sold.

But what if there were other ways to achieve the efficiency of blanket licensing without requiring the collective — and thus potentially anti-competitive — management of performance rights?

That’s the premise behind the National Performance Rights Exchange (NPREX), a Nashville-based startup that has built a marketplace platform it claims will enable publishers and record labels to license performance rights directly to broadcasters and digital service providers (DSPs) without going through ASCAP or BMI, and at a fraction of the cost of what the PROs charge.

The NPREX marketplace is modeled on financial exchanges like the Chicago Board Options Exchange and NASDAQ. At its heart is a pricing algorithm that takes in multiple signals from the music performance market regarding supply, demand (popularity), comparable works’ performance, and other data to yield pricing parameters that both licensees and licensors can use to determine the value of a song and then settle on a final clearing price based on real-world usage data.
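As a purely hypothetical illustration of how such signals might be combined (NPREX’s actual algorithm is not public here, and every weight and number below is invented):

```python
def price_band(base_rate, popularity, comparable_rates):
    """Combine market signals into a floor/ceiling that licensors and
    licensees could negotiate within. Entirely invented weights; this
    is not NPREX's actual model."""
    comp = sum(comparable_rates) / len(comparable_rates)
    midpoint = 0.5 * base_rate + 0.5 * comp      # anchor on comparables
    demand_factor = 1.0 + 0.5 * popularity       # popularity in [0, 1]
    return midpoint * 0.8, midpoint * demand_factor

def clearing_price(floor, ceiling, plays, expected_plays):
    """Settle a final per-play price scaled by real-world usage."""
    usage = min(plays / expected_plays, 1.0)
    return floor + (ceiling - floor) * usage

lo, hi = price_band(base_rate=0.0022, popularity=0.7,
                    comparable_rates=[0.0018, 0.0025])
final = clearing_price(lo, hi, plays=800_000, expected_plays=1_000_000)
```

The exchange analogy holds in the structure: posted parameters narrow the negotiation to a band, and observed usage settles the final price, much as order flow settles prices on a financial exchange.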

“The problem for publishers has always been, what is my profit-maximizing price?” NPREX founder and CEO Lee Greer told RightsTech.com. “And in the music business, there’s a large number of other rights owners trying to do the same thing at the same time, with the same set of buyers. So it’s a very complex, interdependent thing.”

The collective licensing system resolves that dilemma by rolling everything into a single price, but it doesn’t actually solve the underlying problem and doesn’t necessarily maximize publishers’ profit.

“We’ve solved that problem in a way that offers an alternative to the collective licensing system, along with fractional licensing,” Greer said.

An economist and attorney, Greer is a former chief economist for BMI. “I actually proposed to BMI that they build an exchange for direct licensing,” Greer said. “They said, ‘that’s interesting,’ but I was told not to talk about it anymore.”

Greer eventually left BMI to develop an exchange himself, and built the first version of what became NPREX in 2013.

“We solved a pretty ridiculous math problem, but it’s not a new math problem,” Greer said. “It was solved by economists maybe 25 years ago and is now used in financial exchanges and is well understood. The issue was getting the right inputs to make it work for music.”

Along the way, NPREX came to the attention of the Justice Department, which interviewed Greer and his team during the course of its review of the ASCAP and BMI consent decrees. Greer will pay what he describes as a “courtesy call” on the antitrust division on Thursday (12/8) to demonstrate the NPREX platform. He will also be participating in a public meeting organized by the U.S. Commerce Department (with input from the RightsTech Project) on Friday (12/9) on Developing the Digital Marketplace for Copyrighted Works.

This week’s meeting with DOJ is not directly connected to the department’s review of the consent decrees, or to any broader review of the collective licensing system, according to Greer. “This is more about a goodwill gesture on our part,” he said in an email. “I want them to understand that it is indeed doable to implement a systematic direct licensing mechanism that complies with copyright and antitrust law.”

NPREX is set to begin beta testing its marketplace shortly, followed by an initial capital-raise, “which I think will be fairly notable,” Greer said. “We have NDAs with several publishers and third-party data suppliers to the industry. We have the kind of support that will create an end-to-end solution.”

 

Introducing the RightsTech Project

Copyright and technology have long been intimates. The first modern copyright law, the Statute of Anne in 1709, which invested authors for the first time with rights in their own creations, emerged out of a long struggle over regulating the use of the printing press, that technological marvel of the Renaissance. Before Gutenberg, poets and philosophers could earn renown for their work but rarely material reward. It took the technology of movable type to give rise to Droit d’Auteur.

But like a lot of long-term relationships, it’s complicated. In the 300 years since the Statute of Anne, advances in the technology of communication have vastly increased the value of authors’ rights by creating new markets for their works while at the same time challenging their legal and statutory foundations, forcing courts and legislators to make repeated adjustments to the rules of engagement.

Today, as Moore’s Law unravels the world set in place by movable type, the need to reconcile rights and technology is more urgent than ever. Fortunately, just as the commercial logic of Gutenberg’s invention eventually yielded the legal concept of authorship, so, too, the algorithmic methods of digital networks could hold the key to giving renewed commercial substance to the insubstantial bits of logical data that today we call intellectual property.

Or so we at the RightsTech Project believe, which is why we’re rolling out this new, revamped website.

Why ‘Project’?

Machine-to-machine communication requires machine-readable inputs. Making even complex works of authorship machine-readable today is a routine matter, thanks to low-cost encoders and standardized formats and communications protocols. Information about that authorship, however — who created it, when and where it was created, who owns it, what rights in it they are asserting, how it can be licensed for use — all too often remains housed in non-machine-readable catalogs and contracts, or gets stripped out in the hand-off, if it was reliably recorded at all.

The problem isn’t simply a matter of missing metadata, although that’s certainly a big part of it. The problem is that the transfer of bits from machine to machine represents a transfer of value, especially when those bits refer to works of authorship. But without machine-readable means to recognize, account for, and remunerate that transfer of value the chain of interest from user back to author is broken.
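For instance, a minimal machine-readable rights record lets software validate in one pass what a paper contract cannot. The field names below are invented for illustration and follow no actual industry standard:

```python
import json

# Hypothetical machine-readable rights record; the field names are
# invented for illustration and follow no actual industry standard.
record = {
    "work_id": "example-0001",
    "title": "Example Song",
    "creators": [
        {"name": "A. Author", "role": "composer", "share": 0.5},
        {"name": "B. Writer", "role": "lyricist", "share": 0.5},
    ],
    "owner": "Example Publishing",
    "rights": {"territory": "worldwide", "licensable": True},
}

def shares_valid(rec):
    """Check that the declared ownership shares sum to 100 percent,
    the kind of validation a machine-readable record makes trivial."""
    return abs(sum(c["share"] for c in rec["creators"]) - 1.0) < 1e-9

print(shares_valid(record))   # → True
payload = json.dumps(record)  # the record travels as plain JSON
```

Once rights information travels in a form like this alongside the work itself, the accounting and remuneration steps in the value chain can be automated rather than reconstructed from catalogs and contracts.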

Fortunately, a growing number of entrepreneurs, technology developers, and creators themselves have lately turned their attention to devising those machine readable means. Some are already in the marketplace, others still on the drawing board. But the great project of restoring and preserving the chain of value in machine-to-machine communication is now well underway.

We here at the RightsTech Project hope to provide a platform to chronicle, discuss, and advance those efforts. In addition to enhanced news coverage and analysis of rights-tech developments on this website, we will in coming weeks unveil a members-only forum where issues related to rights, technology, digital contracts and the law, registries, metadata, interoperability, and more can be discussed and debated. We will also be rolling out a full calendar of events for 2017, beginning with a full track of RightsTech panels and presentations at Digital Entertainment World in February.

We’ve also opened a dialog with policymakers in the U.S. as they wrestle with possible adjustments to copyright law. The RightsTech Project will be participating in a public meeting in Washington, DC, on December 9th hosted by the U.S. Department of Commerce on Developing the Digital Marketplace for Copyrighted Works. All RightsTech readers and supporting partners are invited to attend the free conference.

So please join us on the journey. Check back here often, sign up for our weekly newsletter to stay abreast of all the RightsTech news and analysis, send us your comments, suggestions, and thoughts as we embark on this next phase of the RightsTech Project.

 

OMI Looks to Move Into Next Phase of Development With Intel Deal

The Open Music Initiative (OMI) this week unveiled a new partnership with Intel to adopt the chip giant’s Sawtooth Lake blockchain technology as a reference platform for its open-source rights data architecture.

Under the arrangement, Sawtooth is expected to become a foundational platform for reference implementations of OMI’s protocols and APIs.

“We need to move from the phase of talking about things to the phase of doing something,” OMI co-founder Panos Panay, director of Berklee College of Music’s Institute for Creative Entrepreneurship, told RightsTech. “This [deal with Intel] is an important step in moving into that new phase.”

According to Panay, Intel will work with OMI companies to create a “virtual sandbox” on the Sawtooth platform where developers can begin prototyping elements of OMI’s planned “minimum viable interoperability” framework.

“We have over 130 companies in OMI and it can get pretty unwieldy,” Panay said. “That’s why it was important for us to put a flag in the sand to say, we’re actually doing something.

“If this is going to work,” he added, “we have to start showing some results within the next six months or so.”

Panay said OMI had looked at a number of major technology companies as potential partners, but settled on Intel in part due to its “perceived neutrality” as a brand.

Intel is “a major technology player and a leader in the blockchain space and they bring instant credibility to what we’re trying to do,” Panay said.

Added Jerry Bautista, VP of Intel’s New Technology Group and GM of its New Business Group: “Blockchain technology offers the potential to address the rights management challenges that many industries, including music, face today. By using Sawtooth Lake as their foundational blockchain reference platform, OMI will be able to accelerate plans to deliver an open source music rights platform.”
