
Commentary: Timescale has a novel licensing model that it hopes will be “open enough” to create a community. Will it work?

Image: Getty Images/iStockphoto

For decades, developers built applications using the same database tools. They might opt for Oracle over IBM’s DB2, or for an open source database like MySQL or PostgreSQL, but they were nearly always using a relational database and speaking SQL.

My, how times have changed.


Today developers have a smorgasbord of options from which to choose, whether document, key-value, columnar, relational, or multi-model. But over the past two years, no database category has grown faster than time series databases, a trend that was evident a year ago and is now glaringly obvious. When I asked Timescale CEO Ajay Kulkarni why this once-niche approach has become so prevalent, he explained it as a matter of data fidelity: “Time series is the highest fidelity of data that you can capture because it tells you exactly how things are changing over time.”

What is less obvious is how to monetize an open source database. For this, Kulkarni has some new ideas. Let’s look at them. 

SEE: Special report: Prepare for serverless computing (free PDF) (TechRepublic)

A cornucopia of database options

As tiresome as it may have been to hit the data nail with a relational database hammer for so long, today developers have an opposite problem: There are so many options. Perhaps too many. (DB-Engines lists 359 different databases.)

Over the past two years, time series databases have exploded in popularity, relative to other options (Figure A).

Figure A

Database popularity by type

Image: DB-Engines

According to Kulkarni, as “data has exploded and is playing a more and more critical role, you want the best tool for the job.” This has led to the rise in purpose-built databases. The analogy Kulkarni used is shoes. There was a time that you’d make do with “trainers” that you might use for cycling, basketball, etc., but today, if you’re a serious road cyclist, you have special shoes for that. Different ones for mountain biking. Still different shoes for running (trail? road?), basketball, etc. For a serious athlete, you’re going to optimize your shoes for the activity. Similarly, he said, “If you have a time series workload that is mission critical for your business, why would you use something that wasn’t built for that?”

But how, I asked Kulkarni, are developers figuring out when to use a time series database? 

Developers will tend to recognize that they have a time series problem, he said. Anytime you need to understand how something changes over time, you have a time series application. A time series database “gives you a dynamic view of what’s happening across your system,” he noted, with that “system” being a software system, a physical power plant, a game, etc. Tracking how data changes over time can yield huge volumes and velocity of data, which other database types might struggle to tackle. By contrast, he stressed, “time series databases like TimescaleDB optimize for these workloads in a way that allows you to get orders of magnitude better performance and scalability at a fraction of the cost.”
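The workhorse query on such workloads is the time-bucketed rollup: snap every reading to a fixed-width window and aggregate within it. As a minimal sketch of the shape of that operation (pure-Python, with made-up illustrative readings; a time series database executes the same kind of rollup natively over billions of rows):

```python
from collections import defaultdict
from datetime import datetime, timedelta

def time_bucket(readings, width):
    """Group (timestamp, value) readings into fixed-width time
    buckets and average each bucket."""
    buckets = defaultdict(list)
    epoch = datetime(1970, 1, 1)
    for ts, value in readings:
        # Snap the timestamp down to the start of its bucket.
        offset = (ts - epoch) // width
        buckets[epoch + offset * width].append(value)
    return {start: sum(v) / len(v) for start, v in sorted(buckets.items())}

# Illustrative CPU readings, one per minute.
readings = [
    (datetime(2020, 11, 6, 12, 0), 40.0),
    (datetime(2020, 11, 6, 12, 1), 60.0),
    (datetime(2020, 11, 6, 12, 5), 10.0),
    (datetime(2020, 11, 6, 12, 6), 30.0),
]
rollup = time_bucket(readings, timedelta(minutes=5))
# Two 5-minute buckets: 12:00 -> 50.0, 12:05 -> 20.0
```

Doing this in application code means dragging every raw row out of storage first; the performance argument for a purpose-built engine is that it pushes the bucketing and aggregation down to where the data lives.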

Given the increasing attention that developers are paying to open source time series databases like TimescaleDB, InfluxDB, and others, the question for Kulkarni has become how to capitalize: how to turn this interest into revenue that, in turn, can fund more investment and innovation. The key for Timescale has been to figure out how to retain the benefits of open source (e.g., community) without sacrificing the company’s ability to fund operations.

Source available, community ready

Timescale has long relied on an open core business model, wherein the company offered the bulk of its code under an open source license (Apache 2.0), making other, advanced components available under a proprietary license. Though open core has been a common model for commercial open source, it has plenty of problems (which I and others have called out). One primary problem is that it blocks community from those advanced features, and community, more than anything else, is what fuels open source adoption.

So Timescale put a new spin on open core: It made 100% of its code source available. The Apache 2.0-licensed core remains under that license, but the advanced functionality is now available under a source available license that allows developers to do pretty much anything they might want, except build a competing database-as-a-service offering. (Anyone can still build a cloud service from the Apache 2.0-licensed core, as DigitalOcean and others have.)

Is it open source? No, it’s not. Will that matter? Kulkarni believes that Timescale’s approach is open in essential ways that will allow Timescale’s community to flourish:

As developers, we like to inspect the code we’re using. Even if we don’t [actually look at source code], we’d like the ability to do it….[A]s a company,…we like the public open source development process. It’s like the things you see in GitHub. It’s public issues, public [pull requests], public commentary. People can see things on the open. And so the goal of our Timescale license was to make the proprietary stuff more open, make it more like open source so that we can work more closely with our community out in the open. Another advantage of having a single code base is that it allows us to develop new features faster instead of having to invest energy in keeping two different repos and stuff.

It’s hard to know exactly The Right Thing To Do™ with an open source business these days as cloud and open source co-mingle. FaunaDB has eschewed open source altogether, relying instead on the richness of its data API to entice developers. Chef and Yugabyte both dropped open core in favor of 100% open source approaches. RackN, on the other hand, decided to make its core proprietary, and open sourced the rest. Companies are experimenting.

But couldn’t Timescale simply open source everything under Apache 2.0 and compete on the basis of offering superior operation of that software, I asked? Sure, said Kulkarni, “We’ll be better operators of TimescaleDB than…anyone else ever will, for a variety of reasons. We know the code base, we can write patches and updates faster than anyone else,” etc. And yet the cloud providers have other advantages that make it a competitive mismatch, he concluded.

It will be fascinating to see how this plays out. The popularity of time series databases like TimescaleDB isn’t in question. The popularity of Timescale’s business model remains to be seen. Has the company reserved enough of the benefits of open source in its source available license to capture the convenience and community of open source? Time (no pun intended) will tell.

Disclosure: I work for AWS, but the views herein are mine and not those of my employer.



DKK 42 million for sustainable chip-based spectrometers


In a new four-year Grand Solutions project—supported by Innovation Fund Denmark with DKK 25 million—DTU and four companies will join forces in a consortium called NEXUS to develop the next generation of ultracompact spectrometers based on chip technology:

“We will quite simply make spectrometers in a radically different way that will make them both inexpensive and sustainable,” says the originator of the new Innovation Fund Denmark project, Associate Professor Søren Stobbe from DTU Fotonik. He continues:

“In NEXUS, we will develop the nanotechnology and the chip technology, as well as the modules that will be used to integrate the spectrometers in the industry already during the project. In short, we will make it possible to perform measurements in places where you cannot measure today. And because we can make the spectrometers small and inexpensive, it can also be good business for companies to choose the most environmentally friendly solution.”

Spectrometers to reduce waste at dairies
To begin with, NEXUS’ spectrometers will make a difference for dairies.

Dairies need spectrometers to measure the protein, fat, and water content of their milk. But the spectrometers currently on the market are large and expensive, so dairies have only a very limited number of them. When a dairy starts producing a new batch of, say, semi-skimmed milk, it rinses the pipes with milk to be certain of what they contain, sending around 10,000 litres of milk directly into the sewers every day. This could be avoided if spectrometers were instead installed to measure what is in the pipes.
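The article’s own figure makes the scale of the waste easy to check; a back-of-envelope calculation (using only the quoted 10,000 litres per day, at a single dairy):

```python
# Back-of-envelope from the figure quoted in the article:
# roughly 10,000 litres of milk rinsed into the sewer per day.
LITRES_PER_DAY = 10_000

litres_per_year = LITRES_PER_DAY * 365
print(f"{litres_per_year:,} litres/year")  # 3,650,000 litres/year
```

Over three and a half million litres a year at one site, before counting the energy to produce and treat it, which is why even a modest per-unit price drop for spectrometers changes the business case.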

Jacob Riis Folkenberg—Vice President of Technology at FOSS, which makes food production equipment—is therefore convinced that the new optical spectroscopy technology has the potential to revolutionize the market:

“In addition to being a waste of time and energy, the 10,000 litres of milk going to waste every day also has a fairly high market value. If you can get the price of a spectrometer down, this will quickly turn into a really good business case for the dairies. We estimate that there is a market potential of three billion Danish kroner at the dairies alone,” he says.

The core of the NEXUS project is DTU’s patented chip technology.

“We have a prototype that works, but we don’t yet have the spectral resolution we need,” says Associate Professor at DTU Fotonik, Søren Stobbe, and continues:

“We need to develop a lot of stuff in the chip, and it must then be built into the whole technology that surrounds it. For it’s one thing to make a chip. But—in reality—a large part of the work is to integrate the chip with the surroundings.”

While DTU Fotonik is responsible for the development of the chip, the companies Beamfox Technologies ApS and ELIONIX INC will develop methods for nanofabrication of the chip. Ibsen Photonics A/S makes the modules in which the chip will be integrated, and FOSS makes the food production probes in which the modules will be installed and which can be used at the dairies.

Wind turbines, aircraft, and health monitoring on the mobile
The NEXUS project starts with the dairies, but the technology will also be relevant in many other contexts.

“The ultimate vision is to be able to make spectrometers so small and inexpensive that it becomes worthwhile, for example, to build them into mobile phones. The spectrometer will be able to perform a kind of primitive blood test, which could give you an indication of whether you need to see your doctor,” says Søren Stobbe.

“Another example is so-called optical interrogation monitors, which can be used to measure and predict the behaviour of large mechanical structures. They can be built into a bridge, a wind turbine blade, or an aeroplane wing, where they will then monitor whether the material begins to give off some strange vibrations. The area of application for spectrometers—if you can make them in this low price range—is gigantic.”

Source: DTU



Tinder Expands Its In-App Face-to-Face Video Chat Feature Globally [Update]


Tinder is expanding its in-app video chat feature globally for all users. After testing the feature in multiple countries, the popular dating app is now globally rolling out its ‘Face to Face’ feature, which lets users video call each other through the app itself. Users can video call potential partners without having to rely on a third-party video service or share other contact details. The Face to Face feature will only work if both parties have opted in.

Announced earlier this year, the Face to Face feature was initially available on iOS and Android only in select markets, but is now rolling out to users across the world, as per reports. It is reaching cities across the US and UK, along with Brazil, Australia, Spain, Italy, France, Vietnam, Indonesia, Korea, Taiwan, Thailand, Peru, and Chile; the exact timeline for other markets has not been announced yet.

Update: Gadgets 360 was informed by Tinder India on Friday that the video calling feature is now available in the country.

Tinder’s new Face to Face video chat feature aims to make dating from home simpler. It is especially helpful during the pandemic, when meeting people in person carries added risk.

This otherwise useful feature does come with the risk of misuse. Users can report a match if needed by navigating to the match’s profile, scrolling down, and tapping Report, then following the instructions on the screen.

How to use Tinder’s Face-to-Face feature

To opt in to the Face to Face feature, navigate to the match’s messages and tap the video icon at the top of the screen. Slide the toggle to the right to unlock Face to Face. After both parties have unlocked the feature, you’ll see a confirmation message in the app. Then tap the video call icon at the top of the chat screen with the match. You’ll get a live video preview, after which you can tap Call.

An ongoing video call can be ended by tapping the red End button. If a match calls you but you don’t want to accept the call, you can decline it in the app or let it ring. The match will be notified that you’re currently unavailable.




How to install the FreeIPA identity and authorization solution on CentOS 8


Jack Wallen walks you through the process of installing an identity and authorization platform on CentOS 8.

Image: CentOS

FreeIPA is an open source identity and authorization platform that provides centralized authorization for Linux, macOS, and Windows. This solution is based on the 389 Directory Server and uses Kerberos, SSSD, Dogtag, NTP, and DNS. The installation isn’t terribly challenging, and you’ll find a handy web-based interface that makes the platform easy to administer.

I’m going to walk you through the steps of getting FreeIPA up and running on CentOS 8. 

SEE: CentOS: A how-to guide (free PDF) (TechRepublic) 

What you’ll need

How to set your hostname

The first thing you must do is set your hostname. I’m going to be demonstrating with a LAN-only FQDN (which then must be mapped in /etc/hosts on any client machine that wants to access the server). 

Set your hostname with the command:

sudo hostnamectl set-hostname HOSTNAME

Where HOSTNAME is the FQDN of the server.

After you’ve set the hostname, you must add an entry in the server’s hosts file. Issue the command:

sudo nano /etc/hosts

Add a line at the bottom like this:

SERVER_IP HOSTNAME
Where SERVER_IP is the IP address of the server and HOSTNAME is the FQDN of the server.

Save and close the file.

How to install FreeIPA

The installation of FreeIPA starts with enabling the idm:DL1 module stream with the command:

sudo dnf module enable idm:DL1

When that command completes, sync the repository with the command:

sudo dnf distro-sync

Install FreeIPA with the command:

sudo dnf install ipa-server ipa-server-dns -y

How to set up FreeIPA Server

Next you have to run the configuration script for FreeIPA Server. To do that, issue the command:

sudo ipa-server-install

The first question you must answer is whether you want to install BIND for DNS. Accept the default (no) by pressing Enter on your keyboard. You must then confirm the domain and realm name, both of which will be detected by the script. Once you’ve confirmed those entries, you’ll need to set a directory manager password and an IPA admin password for the web interface, and then accept the default (no) for the chrony configuration.

After you’ve taken care of the above, you’ll be presented with the details of your installation (Figure A).

Figure A


The details of my installation of FreeIPA Server.

Type y and hit Enter on your keyboard, and the configuration will begin. This takes a bit of time, so either sit back and watch the text fly by or go take care of another task.

When the configuration completes, you’re ready to continue on.

How to access the web interface

Open a browser and point it to https://SERVER_IP (where SERVER_IP is the IP address of the hosting server). You should be prompted for a username and password (Figure B). The username is admin and the password is the one you set for the IPA admin during configuration.

Figure B


The FreeIPA login screen.

Upon successful login, you’ll find yourself at the FreeIPA main window, where you can begin managing your centralized authentication server (Figure C).

Figure C


The FreeIPA main window is ready to work.

And that’s all there is to getting FreeIPA installed on CentOS. You can now spend some time adding users and other bits to make your identity and authorization solution work for your business.
