The COVID-19 pandemic has accelerated the blending of data analytics and DevOps meaning developers, data scientists, and product managers will need to work more closely together than ever before.
In this episode of TechRepublic’s Dynamic Developer, host and TechRepublic editor-in-chief Bill Detwiler speaks with Michael O’Connell, Ph.D., chief analytics officer at TIBCO Software, about how data analytics is merging with DevOps, the data science work his company has done helping organizations respond to the COVID-19 pandemic, and what that work tells them about the future of software development and analytics. The following is a transcript of the interview, edited for readability.
Bill Detwiler (00:16): All right. So before we get started talking about data analytics and DevOps, and how the current COVID-19 pandemic is affecting that, give me a little rundown on TIBCO Software. You all specialize in data analytics. Tell me about what you do.
Michael O’Connell (00:34): Absolutely. Yeah, so the TIBCO Connected Intelligence platform has got three pillars. There’s the connect pillar: API-led microservices, real-time data integration. The unify pillar is where we virtualize data, model data, master data, metadata, reference data, and create virtualized views on multiple source systems, primarily data at rest, but real-time data sources as well. And then feeding that into the analytics layer, which includes our visual analytics, data science, and streaming analytics. Three pillars to the platform, and most business problems we can cover by combining those building blocks and creating analytic applications.
Bill Detwiler: And you work on everything from retail to the COVID-19 pandemic, all sorts of data science questions, right?
Michael O’Connell: Yeah, absolutely. As chief analytics officer, I am customer facing, it’s an offensive position. But I’ve also got one foot in the product teams where I help drive the input for the product evolution based on what I’m seeing in the field. So yeah, we work heavily in the finance sector, energy manufacturing, retail, consumer goods, telco, travel, transportation, logistics. All of those industries have been affected in various ways by COVID-19. I’m sure we’ll get into that.
Chief Analytics Officer: Using data science and analytics to transform businesses
Bill Detwiler: So I’m really interested to hear a little bit more about your role as chief analytics officer. A lot of times, we hear the CXO moniker applied in technical positions these days, because there are so many variants, everything from a CSO, CTO, CIO. So, talk a little bit about what the chief analytics officer role is.
Michael O’Connell: Yeah. So it’s like I was saying: one foot with the customer and one in the product. And it’s also building out the ecosystem a little bit, the customer innovation community. I meet with a lot of customers, all our marquee customers and bigger accounts, help them figure out their digital strategy and so on, and really try to understand what the data sources are, what they’re trying to get done, and where their opportunity is to create value with digital transformation initiatives that are driven by analytics and data science. And then those learnings I bubble back to the product teams, where I’m joined at the hip with a lot of the product managers. I tell them what I’m seeing as we figure out the next releases of the product, and what’s really going to move the needle for our customers and their businesses and create value with our software. So it really is a fun job, to help our customers use data science and analytics to transform their businesses and then use that to transform our software, to generate that value for our customers.
Bill Detwiler: I think that’s something that’s really interesting. What you said was around helping the customers unlock their data, or understand what they are trying to do with data. Because, when I talked to data analytics folks, one of the things they say is a lot of times, they go out to companies, and the companies have data, or they think they have data, or they want to collect data, but they don’t necessarily know what question they’re trying to answer with the data. And that’s really the first place to start. It’s not, “Let’s collect everything and then figure out how we sort through it later.” “Let’s figure out the problems we’re trying to answer first, and then design a system that helps us collect the data we need to answer that question, and then can help us make the decisions based on that answer.” So, how difficult is it to help companies or, talk a little bit about that role. When you walk into a company and they say, “Look, we want to do X.” How much of a challenge for you is it to help them shape, what they want to do with data?
Michael O’Connell: Yeah. So, like you say, you focus on the high-value business problems, where you can really generate revenue to the bottom line, or take out costs, improve productivity, manage risk. You look at the big initiatives that the executive leadership team has for the company, and ask how analytics can play a big role in generating value around those initiatives. That’s the starting point. And in some industries, the value calculations can be ginormous. Think about, say, the energy sector being depressed at the moment; that’s a time when you can optimize. So how do you get the most out of different production facilities? If you can minimize downtime on a high-producing asset, the value calculation is off the charts.
In the current climate, in the Lower 48 at least, when a well stops producing, do you even bother bringing it back up? How’s the strategy different for the Lower 48 versus offshore? And the national oil companies run their business very differently. They’re providing energy to their country, which is different than taking a profit out of a shale oil well, which is less profitable at the moment. So, everybody’s got their own business objective; how we optimize that with analytics is the challenge. And in the current COVID world, retail, CPG, these industries are really transforming right in front of our eyes. And analytics is a big deal for optimizing all aspects of those businesses: supply chain, all that kind of stuff.
COVID-19: Navigating a pandemic with data science
Bill Detwiler: Well, let’s talk about that, because TIBCO has done some interesting work around COVID-19. And looking at, done some modeling around the virus spread, trying to help your customers, help people be able to reopen, get back in business, figure out how this is going to affect their businesses. Tell me about some of the initiatives that TIBCO has undertaken in these last few months.
Michael O’Connell: Yeah, for sure. Well firstly, it’s important to realize that the virus is hidden from us. Data science and analytics become hugely important in finding out what the hell is going on out there. What you see right now happened two to three weeks ago. Or, what you’re looking at right now is going to show up in two, three weeks. It’s kind of like a Twilight Zone episode. You’re predicting the future, but the future is actually right here right now. It’s odd. It’s like this weird time delay. In the work we’ve been doing analyzing COVID cases worldwide, we’ve got models that are predicting the reproduction number of the virus, and all these spikes that are occurring in Florida and elsewhere, we knew about those three weeks ago. We knew that was going to happen. But exponential growth is tricky for people to get, especially when it’s hidden.
There’s a delay between when you get infected and become infectious and when you get symptoms, and in the meantime you’re infecting all these other people. So if you think about it, two to the seventh is 128; two to the 14th is 16,384. And so you think, “Oh, I’ve only got 128 cases.” I’m making an analogy here, obviously, but because the growth is hidden and exponential, all of a sudden you’ve got these problems on your hands. But we can predict that. We’ve got models that are predicting the reproduction of the virus. There are lots of problems with the data as well. It’s got a lot of reporting artifacts: Mondays, a lot of cases are reported; the weekend, not so much. There are lots of errors and artifacts in the data.
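O’Connell’s doubling arithmetic is easy to check in a few lines. This is an illustrative sketch only, treating each step as one doubling of infections; real doubling times vary:

```python
# Illustrative only: mirrors the 2^7 vs 2^14 example above, treating each
# step as one doubling of infections. Not an epidemiological model.

def hidden_cases(doublings: int, initial: int = 1) -> int:
    """Infections after a given number of doublings from `initial` cases."""
    return initial * 2 ** doublings

visible_now = hidden_cases(7)           # what the reports show: 128
actual_by_detection = hidden_cases(14)  # by the time symptoms surface: 16,384
print(visible_now, actual_by_detection)
```

The gap between the two numbers is the “hidden” part: by the time 128 cases are visible, the outbreak may already be two orders of magnitude larger.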
So smoothing that out and fitting models to actually find the signal in a noisy landscape is another important data science contribution. So, in dealing with the data, we’ve got a lot to offer. And then, we mentioned CPG and retail: how do you predict a retail business when traditionally that’s done with same-store sales, and there are no store sales? So it’s getting the eCommerce data, combining it with what limited in-store sales you might have, and then starting to understand which types of stores might get sales. It’s not going to be indoor malls. It’s not going to be stores in tourist-driven areas, where there’s little tourism right now. So how do you start to understand the attributes of the stores, where revenue might come from, and how to maximize the eCommerce revenue?
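The day-of-week smoothing O’Connell describes is, at its simplest, a rolling average. A minimal sketch on synthetic data with a weekly “Monday” reporting spike (not TIBCO’s actual pipeline, which fits proper models):

```python
# A minimal sketch of smoothing day-of-week reporting artifacts with a
# centered rolling mean. Synthetic data: a weekly reporting spike.

def rolling_mean(series, window=7):
    """Centered rolling mean; windows shrink at the edges."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        chunk = series[lo:hi]
        out.append(sum(chunk) / len(chunk))
    return out

daily = [300 if d % 7 == 0 else 100 for d in range(28)]  # Monday spikes
smooth = rolling_mean(daily)  # the weekly sawtooth flattens out
```

A full 7-day window always contains exactly one spike, so the interior of the smoothed series is flat even though the raw series zigzags.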
The ad-spend media landscape has changed, too, as people move their ad spend to channels like Hulu that everybody’s watching, and away from more traditional channels. And we are seeing spikes in eCommerce revenue. Lots of people buying loungewear and floral pink pajamas and stuff. That’s some of the oddities we’ve found in data mining the sales, but in general, eCommerce sales are going through the roof. And how do you then manage the store re-openings and predict the business, when the sands are shifting under our feet?
Blending data science and DevOps to enable digital transformation
Bill Detwiler: And I think something that’s also really interesting that you touched on a little earlier and we talked a little bit about before the show, which was, this pandemic came on quickly when it did come on. Not to talk about, you can debate how soon people should have known or did know about it. But, from a perspective of the shutdown, companies had to react very rapidly to the changing business climate, to a changing workforce. And, you talked about some of the ways that TIBCO, your team, had to quickly spin up some of these efforts. You were talking about how you had data scientists, data analytics folks, really having to rush into DevOps, and kind of try to, “Okay, how can we spin these processes up? How can we spin these services up? How can we spin the analysis up very quickly?”
Bill Detwiler: And vice versa, you have companies that were trying to react really quickly to this pandemic, and DevOps folks saying, “Oh, well we need to integrate data science into our processes, into the apps that we’re building. We have people asking us for tools to help them address these challenges, and we need data in the app.” And that creates this back and forth a little bit between DevOps and data science. Talk a little bit about that, from your perspective. From working there at TIBCO and doing this and what you’re seeing with your customers.
Michael O’Connell: Yeah, it’s been fascinating, the merging of data science and DevOps. So to your point, when we started working with the publicly available data from Johns Hopkins, Our World in Data, and so on, we brought it together and started creating analytic apps, and then we were like, “Well, a lot of our customers want to see this.” And so then we got into the DevOps side, but that’s a whole ‘nother world. The data scientist traditionally does a handshake with somebody else, but no, we had to move quickly. So we stood up our own servers. We ended up with a traditional DevOps setup: you’ve got a blue server and a green server, you’re rapidly innovating on the green server while the blue server’s in production, and you flip them out.
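The blue/green pattern O’Connell describes can be sketched in a few lines. The class, environment names, and URLs here are illustrative, not TIBCO’s actual setup:

```python
# Toy sketch of blue/green deployment: two environments, one live,
# one where you iterate, and an atomic flip between them.
# All names and URLs are hypothetical.

class BlueGreen:
    def __init__(self, blue_url: str, green_url: str):
        self.envs = {"blue": blue_url, "green": green_url}
        self.live = "blue"  # production traffic goes here

    @property
    def staging(self) -> str:
        """The environment you are free to rapidly innovate on."""
        return "green" if self.live == "blue" else "blue"

    def flip(self) -> str:
        """Promote staging to live once the new build checks out."""
        self.live = self.staging
        return self.envs[self.live]

deploy = BlueGreen("https://blue.example.com", "https://green.example.com")
deploy.flip()  # green now serves production; blue becomes the sandbox
```

In practice the flip is usually a load-balancer or DNS switch rather than application code, but the invariant is the same: only one environment is ever live, and the swap is atomic.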
And inside of our data science team, we built a little DevOps team to actually do that. And then we found that the Hopkins data and other data was pretty limited. So we started going out and getting data from all these other places. There’s an amazing number of data sources out there. We’ve got the Google mobility data, we’ve got the COVID tracking data, we’ve got unemployment data in there that some of our customers are asking for. We’ve just got a ton of data now in the application, from all these multiple sources. So we’re federating that in a Postgres database, putting our analytics apps on top of it, and bringing in other data sources through our data virtualization layer, providing data services. It’s become this massive operation with lots of different data and analytics, feeding up into our live hosted app that’s refreshing multiple times per day.
And then, we got into, we started collaborating with these scientists to stop COVID around how to reopen society. And then our DevOps and engineering team under the office of the CTO got involved. And we built this application, TIBCO GatherSmart, to help bring people back together in a safe way. So that became a focus. We built a control center and a mobile app for people to do symptom tracking. You can sit in the control center, figuring out who’s going to get what survey. They answer the questions, you decide if you’re going to give them a QR code to come into a building based on their self-reported symptoms or hotspot, local hotspots. And so this became a DevOps-led project to create the cloud-based application and the mobile app for doing the surveys, and onboarding the employees and the university students, and so on.
That’s become quite popular too. So we’re about to launch that in July, this TIBCO GatherSmart. But we wanted to have a bunch of analytics and data science in it, for the control center where the employer is looking out across their collection of employees, and so on. And so then we sprinkled the data science fairy dust into that DevOps-led initiative. So we’ve seen both sides of it at TIBCO: data scientists becoming DevOps-savvy people, and DevOps people becoming data scientists, and the boundaries blurring. And I’m seeing that in the rest of the world too. It’s like the fast-forward button has been pressed, Bill. It’s crazy.
Building data science into the application development process
Bill Detwiler: Yeah. I think that’s one of the things that we’ve talked a lot about, the acceleration. A lot of these digital transformation efforts were already underway, but they’ve been time-compressed because of the COVID pandemic. So what lessons did you learn for other organizations whose engineers and product managers are working on applications and systems and products right now? If you want to incorporate analytics into those applications from the get-go, what are your recommendations to them based on your experience?
Michael O’Connell: Yeah, so we found that certain people on either side of that fence, if you will, have a desire and passion to learn the other side. It’s a be-a-learn-it-all, not a know-it-all kind of thing. Certain folks on the data science team got fascinated with, “How do I make this low-latency, high-throughput, highly responsive? How do I operationalize this stuff?” And they stepped into those roles as a learning experience. And then similarly on the DevOps engineering side of the house, some people were like, “Well, how do I make this control center app cooler? How do we bring in data and present multilayered analytics views for consumers of it?” They got fascinated with the data science side.
But you do need a bit of both. The data scientist wants to get the DevOps stuff going, needs to be able to go to somebody and ask them, “How do I do this? Can you help me do this?” And vice versa on the engineering side, injecting data scientist’s perspective into that world, and the cross-fertilization back and forth. It’s been really cool and people have gotten to learn a lot outside of their own comfort zone.
Bill Detwiler: Yeah, and what type of tools did you all use internally to facilitate that type of communication? And I guess practices as well. So, is it happening during weekly stand-ups, is it happening during the product design phase? How does that work, and what are some learnings, things that didn’t work really well or things that did, to make that cross-pollination happen?
Michael O’Connell: Yeah, yeah. Well, it sort of started with a Slack channel that we set up around the pandemic. Back in late February, early March, that Slack channel got really popular, and I think we’ve still got, I don’t know, 500 to a thousand people on the main COVID Slack channel. And I sort of put those people to work as well, because as we built data around government interventions at a local level, the people in the channel were from all around the world, and everybody started chipping in. We designed a format for the metadata around government interventions, whether it was school closures or shelter-in-place or whatever it was. We had a taxonomy, and people went to their local regions and started filling it in. And we’ve actually used that metadata to annotate all of the analysis. That’s become quite important as interventions have been lifted, but then in some places put back on; now you can see the changes in the reproduction number or the hotspots with that metadata laid onto it. That’s been literally crowdsource-collected across the TIBCO worldwide team.
So the Slack channel has got a lot of enthusiasm going. And as we started to release the Spotfire live report, and the data sources started to get added, everybody was like, “Well, can you put in this country?” Like I said, we started with Hopkins, but now we’ve got most countries in the world, where we’ve actually gone out and gotten the data ourselves from different department-of-health sites and so on, and assembled pretty broad coverage at a very local level, across countries worldwide. And that’s been driven by that internal enthusiasm around the Slack channel.
Now, as the projects got stood up, we had daily stand-ups, we had weekly leadership-team stand-ups, and we’ve got that on the live report analytics app for COVID, and also on the GatherSmart project. Those projects have now merged a little bit, and we’ve been able to cut back some of the daily stand-ups, but we’ve got very regular meetings on both projects and also a group that’s working on the confluence of the two. So Slack’s been important, plus the usual stand-up meetings, and then finding these little target teams of people who are really enthusiastic to learn each other’s world and collaborate.
Bill Detwiler: Is that something that TIBCO had focused on, before you started addressing the COVID-19 pandemic, which is sort of that cross-functional development. Sort of as part of just normal workforce development, encouraging people to maybe push outside their comfort zone, and also providing them time and the resources to do that. Was that just part of TIBCO’s DNA?
Michael O’Connell: Great question. Some years ago, we had this next-big-thing concept at TIBCO, and AI was one of the next-big-thing topics. And so a lot of our product teams were challenged on, “What are you going to do when it comes to AI?” And we have a philosophy of AI being a foundational element of the dev process. So, you think about the things that drive us: cloud native is one, AI as a foundation is another, and these foundational elements are driven from the top down. Every product manager has to have a better-together strategy for the different TIBCO products working together. They have to have something on, “What’s the AI foundational aspect of this product? What’s the cloud native part? And then what’s the open source embracement part of it?”
So the three guiding principles are infused through our products. And the AI part is the bit that has made the data science groups work together with the integration products, because it’s been a natural top-down-led initiative: let’s bring AI into all the products at a foundational level. That’s been going on for a few years, and it started, some years ago, this collaboration between data science and DevOps. But, like you said, the COVID thing has just accelerated that. We built this GatherSmart app in like six, eight weeks. And the same thing with the live report on COVID. That was grown from the ground up in a matter of weeks. Everybody’s just working hard and having fun.
Data science is becoming highly democratized and merging with visual analytics
Bill Detwiler: Yeah. Where do you think we’re going to go from here when it comes to data analytics, and sort of incorporating that into almost everything? We’ve all become, I think, data analytics experts, or not experts, but at least conversant in the topics. With the COVID-19 pandemic and the daily briefings people get, we’ve learned about terms like flattening the curve, and we’ve learned about exponential growth. So that’s natural for someone like yourself, or someone like me who in a previous life was a research scientist and a DB administrator; I worked in social science research.
So I love anything with data and analytics and being able to tell a good story with it. But a lot of people can get lost in the data. A lot of people glance over it. Sometimes data can be used in ways that support a particular point of view, so people discount it. But I think now it’s in the mainstream, and there is an effort to make data science and analytics, and especially visualizations, more prominent in society in general, and hopefully people will make educated decisions based on some of this. Where do you think we’re going from here? I’m assuming it’s increased acceleration, but what do you see on the horizon for analytics?
Michael O’Connell: Well, the worlds of data science and visual analytics have just collided and merged. And they’ve also become highly democratized, to your point. A turning point for me was when Governor Cuomo started tweeting out, “We got to get our effective reproduction number down to 0.8 to bring this under control.” I’m like, “Dude, that’s amazing!” I retweeted that. As I mentioned earlier, we’ve had tools in the live report that estimate the reproduction number at a very local level, at the county level and so on. We’ve been tracking the spread of the virus. We saw Michigan before it erupted. Most recently we saw Arizona and Florida; you can see it, we’re predicting the spread. The effective reproduction number is a function of time. But I thought of it as a real geeked-out thing.
And then Cuomo was tweeting about it, that he’s going to get it down to 0.8. Data science is in your living room every night. And it’s just amazing that you don’t have to spend time building the context for a conversation you want to have with a customer about data science; it’s already there. People are seeing it every night in their living room. So it’s just been fantastic for me as a data scientist to see how democratized and widespread data science has become. Everybody is becoming a data scientist; it’s awesome.
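The effective reproduction number Cuomo cited can be crudely approximated as the ratio of new cases one serial interval apart. This is an illustrative sketch only; production estimators like the ones O’Connell describes use far more careful statistical models:

```python
# Crude sketch of the effective reproduction number R_t: the ratio of
# new cases today to new cases one serial interval (~5 days) earlier.
# Illustrative only; real estimators are Bayesian and handle noise,
# delays, and under-reporting.

def crude_rt(new_cases, serial_interval=5):
    """One rough R_t estimate per day, starting after the first interval."""
    return [
        new_cases[t] / new_cases[t - serial_interval]
        for t in range(serial_interval, len(new_cases))
        if new_cases[t - serial_interval] > 0
    ]

# A declining outbreak: every estimate comes out below 1.0.
declining = [100, 95, 90, 86, 81, 77, 73, 70, 66, 63]
rts = crude_rt(declining)
```

Values below 1.0 mean each case infects fewer than one other person on average, which is exactly why “get it down to 0.8” is the goal: below 1, the outbreak shrinks.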
Bill Detwiler: Yeah. And it’s sad that it took something as tragic as the COVID-19 pandemic to do that. You and I, people who have been around data and analytics and systems like this for so long, would be much happier going back to doing retail analysis. It’s heartbreaking that something so tragic has brought this to the forefront, but at least it can be a turning point to help people understand how important analytics, data, and science are to making effective decisions. Well Michael, I really appreciate you taking the time to be here with us again. It’s been a great conversation.
Michael O’Connell: Well let’s get back together sometime soon. I mean, we’re only six months into this year, and we’ve had a global pandemic, we’ve had a racial uprising and an economic meltdown. So, Bill let’s get back together later in the year and see where we stand at the end of the year.
Bill Detwiler: Hopefully things will be better, fingers crossed. It can be much better. All right Michael. Thank you again.
Michael O’Connell: Yeah, cheers.
New technology from Stanford scientists finds long-hidden quakes, and possible clues about how earthquakes evolve
Measures of Earth’s vibrations zigged and zagged across Mostafa Mousavi’s screen one morning in Memphis, Tenn. As part of his PhD studies in geophysics, he sat scanning earthquake signals recorded the night before, verifying that decades-old algorithms had detected true earthquakes rather than tremors generated by ordinary things like crashing waves, passing trucks or stomping football fans.
“I did all this tedious work for six months, looking at continuous data,” Mousavi, now a research scientist at Stanford’s School of Earth, Energy & Environmental Sciences (Stanford Earth), recalled recently. “That was the point I thought, ‘There has to be a much better way to do this stuff.’”
This was in 2013. Handheld smartphones were already loaded with algorithms that could break down speech into sound waves and come up with the most likely words in those patterns. Using artificial intelligence, they could even learn from past recordings to become more accurate over time.
Seismic waves and sound waves aren’t so different. One moves through rock and fluid, the other through air. Yet while machine learning had transformed the way personal computers process and interact with voice and sound, the algorithms used to detect earthquakes in streams of seismic data have hardly changed since the 1980s.
That has left a lot of earthquakes undetected.
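The decades-old detectors alluded to here are classically energy-ratio triggers such as STA/LTA; the article does not name the algorithm, so this is a standard example rather than a description of the specific systems in use. A minimal sketch:

```python
# Minimal STA/LTA (short-term average / long-term average) trigger,
# the classic 1980s-style detector (named here as a standard example;
# the article itself doesn't specify the algorithm). A spike in the
# ratio of short-window to long-window signal energy flags a candidate
# event against the background noise level.

def sta_lta(signal, sta=5, lta=50):
    """STA/LTA energy ratio per sample (0 until the long window fills)."""
    energy = [x * x for x in signal]
    out = [0.0] * len(signal)
    for i in range(lta, len(signal)):
        short = sum(energy[i - sta:i]) / sta
        long_ = sum(energy[i - lta:i]) / lta
        out[i] = short / long_ if long_ > 0 else 0.0
    return out

# Quiet background noise, then a burst: the ratio jumps at the burst.
trace = [0.1] * 100 + [2.0] * 10 + [0.1] * 40
ratio = sta_lta(trace)
```

The weakness the article points to follows directly from this design: a microquake whose amplitude barely exceeds the noise never moves the ratio past the trigger threshold, so it goes undetected.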
Big quakes are hard to miss, but they’re rare. Meanwhile, imperceptibly small quakes happen all the time. Occurring on the same faults as bigger earthquakes – and involving the same physics and the same mechanisms – these “microquakes” represent a cache of untapped information about how earthquakes evolve – but only if scientists can find them.
In a recent paper published in Nature Communications, Mousavi and co-authors describe a new method for using artificial intelligence to bring into focus millions of these subtle shifts of the Earth. “By improving our ability to detect and locate these very small earthquakes, we can get a clearer view of how earthquakes interact or spread out along the fault, how they get started, even how they stop,” said Stanford geophysicist Gregory Beroza, one of the paper’s authors.
Focusing on what matters
Mousavi began working on technology to automate earthquake detection soon after his stint examining daily seismograms in Memphis, but his models struggled to tune out the noise inherent to seismic data. A few years later, after joining Beroza’s lab at Stanford in 2017, he started to think about how to solve this problem using machine learning.
The group has produced a series of increasingly powerful detectors. A 2018 model called PhaseNet, developed by Beroza and graduate student Weiqiang Zhu, adapted algorithms from medical image processing to excel at phase-picking, which involves identifying the precise start of two different types of seismic waves. Another machine learning model, released in 2019 and dubbed CRED, was inspired by voice-trigger algorithms in virtual assistant systems and proved effective at detection. Both models learned the fundamental patterns of earthquake sequences from a relatively small set of seismograms recorded only in northern California.
In the Nature Communications paper, the authors report they’ve developed a new model to detect very small earthquakes with weak signals that current methods usually overlook, and to pick out the precise timing of the seismic phases using earthquake data from around the world. They call it Earthquake Transformer.
According to Mousavi, the model builds on PhaseNet and CRED, and “embeds those insights I got from the time I was doing all of this manually.” Specifically, Earthquake Transformer mimics the way human analysts look at the set of wiggles as a whole and then home in on a small section of interest.
People do this intuitively in daily life – tuning out less important details to focus more intently on what matters. Computer scientists call it an “attention mechanism” and frequently use it to improve text translations. But it’s new to the field of automated earthquake detection, Mousavi said. “I envision that this new generation of detectors and phase-pickers will be the norm for earthquake monitoring within the next year or two,” he said.
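The core of an attention mechanism can be sketched in a few lines of plain Python. Earthquake Transformer’s actual architecture is far more elaborate; this shows only the basic idea of scaled dot-product attention, where each query position takes a weighted average of values, weighted by how well it matches each key:

```python
# Minimal scaled dot-product attention sketch (the general mechanism,
# not Earthquake Transformer's specific architecture). Scores measure
# query/key similarity; softmax turns scores into attention weights.
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """For each query, a weighted average of values, weighted by key match."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# One query that matches the second key most strongly, so the output
# is pulled toward the second value:
q = [[0.0, 1.0]]
k = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
v = [[1.0], [2.0], [3.0]]
result = attention(q, k, v)
```

This is the “tuning out less important details” step in miniature: positions with low scores get small weights and contribute little to the output.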
The technology could allow analysts to focus on extracting insights from a more complete catalogue of earthquakes, freeing up their time to think more about what the pattern of earthquakes means, said Beroza, the Wayne Loel Professor of Earth Science at Stanford Earth.
Understanding patterns in the accumulation of small tremors over decades or centuries could be key to minimizing surprises – and damage – when a larger quake strikes.
The 1989 Loma Prieta quake ranks as one of the most destructive earthquake disasters in U.S. history, and as one of the largest to hit northern California in the past century. It’s a distinction that speaks less to extraordinary power in the case of Loma Prieta than to gaps in earthquake preparedness, hazard mapping and building codes – and to the extreme rarity of large earthquakes.
Only about one in five of the approximately 500,000 earthquakes detected globally by seismic sensors every year produce shaking strong enough for people to notice. In a typical year, perhaps 100 quakes will cause damage.
In the late 1980s, computers were already at work analyzing digitally recorded seismic data, and they determined the occurrence and location of earthquakes like Loma Prieta within minutes. Limitations in both the computers and the waveform data, however, left many small earthquakes undetected and many larger earthquakes only partially measured.
After the harsh lesson of Loma Prieta, many California communities have come to rely on maps showing fault zones and the areas where quakes are likely to do the most damage. Fleshing out the record of past earthquakes with Earthquake Transformer and other tools could make those maps more accurate and help to reveal faults that might otherwise come to light only in the wake of destruction from a larger quake, as happened with Loma Prieta in 1989, and with the magnitude-6.7 Northridge earthquake in Los Angeles five years later.
“The more information we can get on the deep, three-dimensional fault structure through improved monitoring of small earthquakes, the better we can anticipate earthquakes that lurk in the future,” Beroza said.
To determine an earthquake’s location and magnitude, existing algorithms and human experts alike look for the arrival time of two types of waves. The first set, known as primary or P waves, advance quickly, pushing, pulling and compressing the ground like a Slinky as they move through it. Next come shear or S waves, which travel more slowly but can be more destructive as they move the Earth side to side or up and down.
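The gap between P and S arrivals is what lets a single station estimate its distance to a quake: because P waves travel faster, the lag grows with distance. A minimal sketch using typical crustal velocities (illustrative values, not figures from the paper):

```python
# Ranging a quake from the S-minus-P arrival-time lag. With P velocity
# vp and S velocity vs, distance d satisfies d/vs - d/vp = lag, so
# d = lag / (1/vs - 1/vp). Velocities are typical crustal values,
# used for illustration only.

def distance_km(p_s_lag_s, vp=6.0, vs=3.5):
    """Epicentral distance (km) from the S-minus-P lag in seconds."""
    return p_s_lag_s / (1.0 / vs - 1.0 / vp)

d = distance_km(10.0)  # a 10 s lag puts the quake roughly 84 km away
```

This is why precise phase-picking matters so much: an error of a fraction of a second in either arrival time shifts the estimated distance by kilometers, and location estimates triangulated from several stations inherit that error.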
To test the Earthquake Transformer, the team wanted to see how it worked with earthquakes not included in the training data that are used to teach algorithms what a true earthquake and its seismic phases look like. The training data included one million hand-labelled seismograms, recorded mostly over the past two decades in earthquake-prone regions around the world, excluding Japan. For the test, they selected five weeks of continuous data recorded in the region of Japan shaken 20 years ago by the magnitude-6.6 Tottori earthquake and its aftershocks.
The model detected and located 21,092 events – more than two and a half times the number of earthquakes picked out by hand, using data from only 18 of the 57 stations that Japanese scientists originally used to study the sequence. Earthquake Transformer proved particularly effective for the tiny earthquakes that are harder for humans to pick out and that are being recorded in overwhelming numbers as seismic sensors multiply.
“Previously, people had designed algorithms to say, find the P wave. That’s a relatively simple problem,” explained co-author William Ellsworth, a research professor in geophysics at Stanford. Pinpointing the start of the S wave is more difficult, he said, because it emerges from the erratic last gasps of the fast-moving P waves. Other algorithms have been able to produce extremely detailed earthquake catalogs, including huge numbers of small earthquakes missed by analysts – but their pattern-matching algorithms work only in the region supplying the training data.
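The "relatively simple" P-wave detectors Ellsworth refers to include classic energy-ratio triggers such as STA/LTA, which flag an arrival when short-term signal energy jumps above the long-term background. The sketch below is one such traditional picker, shown only for contrast – it is not how Earthquake Transformer works, which relies on a trained neural network instead:

```python
# Minimal STA/LTA (short-term average / long-term average) trigger, a
# classic earthquake picker: flag a sample when recent signal energy
# jumps well above the preceding background level. Shown for contrast
# only -- NOT the Earthquake Transformer's method.
def sta_lta(signal, sta_len=3, lta_len=10, threshold=4.0):
    """Return indices of samples where STA/LTA exceeds the threshold."""
    picks = []
    for i in range(sta_len + lta_len, len(signal) + 1):
        sta_win = signal[i - sta_len:i]                     # recent samples
        lta_win = signal[i - sta_len - lta_len:i - sta_len] # background before them
        sta = sum(x * x for x in sta_win) / sta_len
        lta = sum(x * x for x in lta_win) / lta_len
        if lta > 0 and sta / lta > threshold:
            picks.append(i - 1)  # newest sample in the short-term window
    return picks

# Quiet background noise followed by a sudden strong arrival:
trace = [0.1] * 20 + [5.0] * 5
print(sta_lta(trace)[0])  # first triggered sample: 20, the onset
```

Such triggers work well for clear P arrivals but struggle with the S wave buried in the P-wave coda – exactly the weakness the neural-network approach addresses.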
With Earthquake Transformer running on a simple computer, analysis that would ordinarily take months of expert labor was completed within 20 minutes. That speed is made possible by algorithms that search for the existence of an earthquake and the timing of the seismic phases in tandem, using information gleaned from each search to narrow down the solution for the others.
“Earthquake Transformer gets many more earthquakes than other methods, whether it’s people sitting and trying to analyze things by looking at the waveforms, or older computer methods,” Ellsworth said. “We’re getting a much deeper look at the earthquake process, and we’re doing it more efficiently and accurately.”
The researchers trained and tested Earthquake Transformer on historic data, but the technology is ready to flag tiny earthquakes almost as soon as they happen. According to Beroza, “Earthquake monitoring using machine learning in near real-time is coming very soon.”
Source: Stanford University
Google Pixel 4a Review | NDTV Gadgets 360
After the disappointing launch price of the Pixel 3a in India last year, and the decision to not launch the Pixel 4, there has been little reason to get excited about the new models launched this year. Google is not launching the Pixel 5 or the Pixel 4a 5G in India, at least not yet, but it has launched the Pixel 4a. The most affordable member of this year’s Pixel series, the Pixel 4a is priced a lot more aggressively this time around in India, at Rs. 31,999.
This year, Google is keeping things simple. There’s just one version of the Pixel 4a, so no XL option. It’s also available in only one configuration, with 6GB of RAM and 128GB of storage, and in only one colour – Just Black. I have spent a lot of time with it following my initial impressions a few weeks ago, and now it’s time to see if Google has done enough this year to get people interested again.
Google Pixel 4a design
There’s something very likeable about the Google Pixel 4a’s design. It’s not flashy or in-your-face; in fact it’s the exact opposite and yet it looks attractive. Google has used a unibody polycarbonate shell with a soft-touch matte finish. It looks nice and doesn’t attract fingerprints. The Pixel 4a is relatively slim at 8.2mm and really light, at just 143g. The overall compact dimensions of the body and the rounded edges make it a very comfortable phone to handle.
The volume and power buttons are placed on the right, and offer good tactile feedback. There’s a headphone jack on the top, a tray for a single Nano-SIM on the left, and the speaker and USB Type-C port on the bottom. The Google Pixel 4a only accepts a single physical SIM, but it does support an additional eSIM.
The back has a capacitive fingerprint sensor, so there’s no in-display sensor despite this phone having an OLED panel. This isn’t a big deal, as the fingerprint sensor works very well and can be used to pull down the notification shade with a swipe gesture. However, there’s no option for face recognition on the Pixel 4a.
Google has thankfully ditched the massive bezel of the previous generation for much narrower ones on the Pixel 4a. The borders are still a bit thick but they’re more or less even all around the display. You get a hole-punch cutout for the selfie camera. The display is a bit larger than that of the Pixel 3a, measuring 5.8 inches diagonally. It’s an OLED panel with a full-HD+ resolution. It supports HDR10 playback and uses Gorilla Glass 3 for scratch protection.
One feature that’s missing compared to last year’s model is the Active Edge sensors. On previous Pixel phones, you used to be able to activate Google Assistant by squeezing the pressure-sensitive side panels. On the other hand, Google has kept the Now Playing feature, which automatically recognises songs being played in the background and displays the title and artist on your lockscreen or always-on display.
In the retail box of the Google Pixel 4a, you’ll find an 18W Type-C charger, a USB Type-C to Type-C cable, a Quick Switch adapter for importing data from an older phone, a SIM tool, and documentation. You don’t get any case or headset.
Google Pixel 4a performance and battery life
The Google Pixel 4a uses the Qualcomm Snapdragon 730G SoC, which is not the most powerful SoC you’ll find in phones at this price, but is good enough. There’s 6GB of LPDDR4X RAM and 128GB of storage, which again, are fairly adequate. The Pixel 4a supports 4G VoLTE, dual-band Wi-Fi ac, Bluetooth 5, NFC, and four satellite navigation systems. There’s no wireless charging or IP rating, but you do get stereo speakers. The Pixel 4a also features Google’s Titan M security hardware for biometric authentication and other security-related functions.
For software, units in the market at the time of the India launch are running Android 10 out of the box, but a final Android 11 update is available. My review unit was already running Android 11 when I began using it. If you’ve used a Pixel smartphone before, you know what to expect. The interface is completely clean, with no bloatware and just the essential Google apps preinstalled. There’s a Personal Safety app from Google which lets you set up emergency contacts, etc. There’s a Pixel Tips app to help first-time Pixel users get acquainted with their smartphone.
Google has incorporated some basic gestures, which can be found in the Settings app. You can enable gestures to quickly access the camera, silence an incoming call, etc. Being a Pixel phone, Google offers a minimum of three years of OS and security updates.
The relatively powerful hardware combined with Google’s lean software makes the usage experience wonderful. Unlocking the phone with the fingerprint sensor is quick, the interface is snappy, and the always-on display is great for peeking at the time or unread alerts. Google Assistant is speedy too, be it transcribing what you just said or fetching search results. The Pixel 4a unfortunately misses out on a higher refresh rate display, even 90Hz, which would have made the experience even better.
I found the display to be pretty good for watching content on. Colours are vivid, blacks are deep, and text is generally sharp. The screen gets very bright too but whites look a bit murky even at full brightness. This is especially noticeable when compared side by side with something like the OnePlus Nord, which is in the same price segment. HDR content looks good, whether played locally or through streaming apps. The stereo speakers sound good and get decently loud, although the bottom-firing one is a bit louder than the earpiece.
Gaming was also enjoyable. Everything from simple titles such as Mars: Mars, to heavier ones such as Call of Duty: Mobile ran smoothly. I didn’t feel any heating issues either, other than the side of the frame getting a bit warm.
The Google Pixel 4a has a 3,140mAh battery, which is a modest capacity by 2020 standards. Unsurprisingly, it didn’t fare too well in our HD video battery loop test, running for a little more than twelve and a half hours. However, I am happy to report that with medium to light real-world usage, I was able to make the Pixel 4a last for one full day on a single charge. On days with lots of camera usage and video watching, it did drain a bit faster, so if you’re expecting a phone that can last more than a day, you might be a little disappointed.
The Pixel 4a can fast-charge its battery with the bundled 18W adapter to about 52 percent in half an hour, and up to 88 percent in an hour. It took about 15-20 minutes more to reach full capacity. Since it uses the USB Power Delivery (PD) standard, you can use any Type-C PD charger to quickly charge the Pixel 4a.
Google Pixel 4a cameras
The Google Pixel 3a had an impressive set of cameras, not just for its segment, but in general. The Pixel 4a sticks to a single front and rear camera, with the same resolutions as their predecessors. The rear camera has a 12.2-megapixel sensor and an f/1.7 aperture, dual-pixel PDAF, and optical stabilisation. The front camera uses an 8-megapixel sensor and has an f/2.0 aperture. Sadly, there isn’t a physical ultra-wide-angle rear camera like you get on the 5G variant of the Pixel 4a, and on most other phones at this price level now.
However, Google hasn’t skimped on camera features in software, which are mostly the same as what you’d get with the flagship Pixel 5. There’s Night Sight, Top Shot, Super Res Zoom, Motion Autofocus, and Live HDR+. Frequent Faces is a feature which, when enabled, is said to recognise and recommend shots that are focused on specific faces you capture often, when selecting a Top Shot or Motion Photo. When shooting stills, the Google Pixel 4a lets you tweak the exposure and shadows independently before taking a shot, and even shows you the effects of each adjustment in real-time in the viewfinder. For videos, you can manually adjust the exposure too, and tapping the viewfinder once will begin focus tracking.
The camera app has nearly all the shooting modes one would expect. There’s no manual mode, but you can enable RAW capture through the Settings menu.
Landscape photos shot during the day looked stunning. The Google Pixel 4a managed to capture natural-looking colours and well-balanced exposures. Details were fairly good, but when magnified, I noticed a bit of noise, and finer textures and edges didn’t have very good definition. Close-up shots had very good details, rich colours, and a pleasing background blur. In Portrait mode, I could digitally zoom in up to 4x. Portrait shots generally looked striking with good edge detection, details, and colours.
The Pixel 4a did an equally good job with low-light photos. Even without Night Sight, images looked clean with minimal noise, colours were vivid, and details were well defined. Night Sight helps correct the exposure a bit, and in very dark scenes, it can make an impactful difference.
Pixel smartphones have thus far been very good for selfies, and that continues. Selfies shot in daylight pack in very good detail, and thanks to the wide field of view, you can get quite a bit of the background in the frame. Portrait mode works well for selfies too. In low light, Night Sight makes a big difference to the type of photos you can capture. When used in combination with the screen flash (which is more of a fill-light than a flash), the results are even better.
The Google Pixel 4a can shoot up to 4K video at 30fps. During the day, I found the quality and stabilisation to be very good. Videos captured with the selfie camera are also electronically stabilised. Even in low light, video quality is pretty decent, with good exposure and a tolerable amount of shimmer when you walk.
I really wish Google had included an ultra-wide-angle camera, as that would have made the setup pretty much perfect. Even so, both cameras on the Pixel 4a deliver consistent and reliable results.
Verdict: Should you buy the Pixel 4a?
The Google Pixel 4a is being sold on Flipkart at a promotional price of Rs. 29,999, which is a bit lower than its official retail price of Rs. 31,999. I think it’s a good buy at this price for anyone looking to capture good photos and video with their smartphone. Unlike last year’s Pixel 3a, the Pixel 4a isn’t crippled too much in terms of processing power. It features a good SoC as well as enough RAM and storage to offer decent gaming performance. Battery life might not be as good as what the competition achieves, but despite its small capacity, you should expect this phone to last nearly a full day on average.
The OnePlus Nord is a very tempting competitor to the Google Pixel 4a, and it manages to one-up this phone in almost all areas, on paper anyway. So which one should you buy? That’s a discussion for another article, coming up very soon.
Artificial intelligence: Cheat sheet – TechRepublic
Learn artificial intelligence basics, business use cases, and more in this beginner’s guide to using AI in the enterprise.
Artificial intelligence (AI) is the next big thing in business computing. Its uses come in many forms, from simple tools that respond to customer chat, to complex machine learning systems that predict the trajectory of an entire organization. Popularity does not necessarily lead to familiarity, and despite its constant appearance as a state-of-the-art feature, AI is often misunderstood.
In order to help business leaders understand what AI is capable of, how it can be used, and where to begin an AI journey, it’s essential to first dispel the myths surrounding this huge leap in computing technology. Learn more in this AI cheat sheet. This article is also available as a download, Cheat sheet: Artificial intelligence (free PDF).
What is artificial intelligence?
When AI comes to mind, it’s easy to get pulled into a world of science-fiction robots like Data from Star Trek: The Next Generation, Skynet from the Terminator series, and Marvin the paranoid android from The Hitchhiker’s Guide to the Galaxy.
The reality of AI is nothing like fiction, though. Instead of fully autonomous thinking machines that mimic human intelligence, we live in an age where computers can be taught to perform limited tasks that involve making judgments similar to those made by people, but are far from being able to reason like human beings.
Modern AI can perform image recognition, understand the natural language and writing patterns of humans, make connections between different types of data, identify abnormalities in patterns, strategize, predict, and more.
All artificial intelligence comes down to one core concept: Pattern recognition. At the core of all applications and varieties of AI is the simple ability to identify patterns and make inferences based on those patterns.
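That core idea can be made concrete with a deliberately tiny sketch: classify a new data point by finding the most similar labelled example it has seen before (a 1-nearest-neighbour rule). The data and labels below are invented for illustration – real AI systems learn far richer patterns, but the principle of matching new data against known patterns is the same:

```python
# Toy illustration of "pattern recognition": label a new point by
# finding the most similar labelled example (1-nearest-neighbour).
# The points and labels are made up purely for illustration.
import math

labelled = [
    ((1.0, 1.0), "low-risk"),
    ((1.2, 0.9), "low-risk"),
    ((8.0, 9.0), "high-risk"),
    ((8.5, 8.7), "high-risk"),
]

def classify(point):
    """Return the label of the nearest known example."""
    return min(labelled, key=lambda ex: math.dist(point, ex[0]))[1]

print(classify((1.1, 1.0)))  # -> low-risk
print(classify((9.0, 8.0)))  # -> high-risk
```

Everything from image recognition to fraud detection elaborates on this same move: compare incoming data against learned patterns and infer from the closest match.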
SEE: Artificial intelligence: A business leader’s guide (free PDF) (TechRepublic)
AI isn’t truly intelligent in the way we define intelligence: It can’t think and lacks reasoning skills, it doesn’t show preferences or have opinions, and it’s not able to do anything outside of the very narrow scope of its training.
That doesn’t mean AI isn’t useful for businesses and consumers trying to solve real-world problems, it just means that we’re nowhere close to machines that can actually make independent decisions or arrive at conclusions without being given the proper data first. Artificial intelligence is still a marvel of technology, but it’s still far from replicating human intelligence or truly intelligent behavior.
What can artificial intelligence do?
AI’s power lies in its ability to become incredibly skilled at doing the things humans train it to. Microsoft and Alibaba independently built AI machines capable of better reading comprehension than humans, Microsoft has AI that is better at speech recognition than its human builders, and some researchers are predicting that AI will outperform humans at almost everything in less than 50 years.
That doesn’t mean those AI creations are truly intelligent–only that they’re capable of performing human-like tasks with greater efficiency than us error-prone organic beings. If you were to try, say, to give a speech recognition AI an image-recognition task, it would fail completely. All AI systems are built for very specific tasks, and they don’t have the capability to do anything else.
Since the COVID-19 pandemic began in early 2020, artificial intelligence and machine learning have seen a surge of activity as businesses rush to fill holes left by employees forced to work remotely, or those who’ve lost jobs due to the financial strain of the pandemic.
The quick adoption of AI during the pandemic highlights another important thing that AI can do: Replace human workers. According to Gartner, 79% of businesses are currently exploring or piloting AI projects, meaning most of those projects are still in the early stages of development. What the pandemic has done for AI is cause a shift in priorities and applications: Instead of focusing on financial analysis and consumer insight, post-pandemic AI projects are focusing on customer experience and cost optimization, Algorithmia found.
Like other AI applications, customer experience and cost optimization are based on pattern recognition. In the case of the former, AI bots can perform many basic customer service tasks, freeing employees up to only address cases that need human intervention. AI like this has been particularly widespread during the pandemic, when workers forced out of call centers put stress on the customer service end of business.
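The first-line triage such bots perform can be sketched as simple intent matching: answer the messages that fit a known pattern, and escalate the rest to a person. Real chatbots use trained natural language models for the matching step; the keyword rules and canned replies below are invented stand-ins for illustration only:

```python
# Sketch of first-line customer-service triage: route a message to a
# canned answer when it matches a known intent, otherwise escalate to
# a human. Real chatbots use trained NLP models for the matching; the
# keyword rules and replies here are invented for illustration.
INTENTS = {
    "refund": "You can request a refund from the Orders page.",
    "password": "Use the 'Forgot password' link on the sign-in page.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def triage(message: str) -> str:
    text = message.lower()
    for keyword, reply in INTENTS.items():
        if keyword in text:
            return reply   # the bot handles it
    return "ESCALATE"      # needs human intervention

print(triage("How do I reset my password?"))
print(triage("My package arrived damaged"))  # -> ESCALATE
```

The business value is in the split: routine questions never reach an agent, so the humans who remain handle only the cases that genuinely need them.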
What are the business applications of artificial intelligence?
Modern AI systems are capable of amazing things, and it’s not hard to imagine what kind of business tasks and problem solving exercises they could be suited to. Think of any routine task, even incredibly complicated ones, and there’s a possibility an AI can do it more accurately and quickly than a human–just don’t expect it to do science fiction-level reasoning.
In the business world, there are plenty of AI applications, but perhaps none is gaining traction as much as business analytics and its end goal: Prescriptive analytics.
Business analytics is a complicated set of processes that aim to model the present state of a business, predict where it will go if kept on its current trajectory, and model potential futures with a given set of changes. Prior to the AI age, analytics work was slow, cumbersome, and limited in scope.
SEE: Special report: Managing AI and ML in the enterprise (ZDNet) | Download the free PDF version (TechRepublic)
When modeling the past of a business, it’s necessary to account for nearly endless variables, sort through tons of data, and include all of it in an analysis that builds a complete picture of the up-to-the-present state of an organization. Think about the business you’re in and all the things that need to be considered, and then imagine a human trying to calculate all of it–cumbersome, to say the least.
Predicting the future with an established model of the past can be easy enough, but prescriptive analysis, which aims to find the best possible outcome by tweaking an organization’s current course, can be downright impossible without AI help.
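At its simplest, prescriptive analysis is a search: try candidate adjustments to the current course, run each through a predictive model, and recommend the one with the best predicted outcome. The toy sketch below makes that concrete with a single decision variable and a made-up demand model – the numbers and the model are invented purely for illustration, and real systems optimise many variables with learned models:

```python
# Toy prescriptive-analytics sketch: given a predictive model of
# profit as a function of price, search candidate prices for the one
# with the best predicted outcome. The demand model and all numbers
# are invented for illustration only.
def predicted_profit(price: float) -> float:
    units_sold = max(0.0, 1000 - 40 * price)  # demand falls as price rises
    return (price - 5.0) * units_sold          # 5.0 = assumed unit cost

candidates = [p / 2 for p in range(10, 50)]    # prices 5.00 .. 24.50
best_price = max(candidates, key=predicted_profit)
print(best_price)  # -> 15.0, the price with the highest predicted profit
```

Scale the same loop up to thousands of interacting variables and a model learned from years of business data, and the search becomes intractable without machine help – which is exactly where AI earns its keep.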
SEE: Artificial intelligence ethics policy (TechRepublic Premium)
There are many artificial intelligence software platforms and AI machines designed to do all that heavy lifting, and the results are transforming businesses: What was once out of reach for smaller organizations is now feasible, and businesses of all sizes can make the most of each resource by using artificial intelligence to design the perfect future.
Analytics may be the rising star of business AI, but it’s hardly the only application of artificial intelligence in the commercial and industrial worlds. Other AI use cases for businesses include the following.
- Recruiting and employment: Human beings can often overlook qualified candidates, or candidates can fail to make themselves noticed. Artificial intelligence can streamline recruiting by filtering through larger numbers of candidates more quickly, and by noticing qualified people who may go overlooked.
- Fraud detection: Artificial intelligence is great at picking up on subtle differences and irregular behavior. If trained to monitor financial and banking traffic, AI systems can pick up on subtle indicators of fraud that humans may miss.
- Cybersecurity: Just as with financial irregularities, artificial intelligence is great at detecting indicators of hacking and other cybersecurity issues.
- Data management: AI can categorize raw data and find previously unknown relations between items.
- Customer relations: Modern AI-powered chatbots are incredibly good at carrying on conversations thanks to natural language processing. AI chatbots can be a great first line of customer interaction.
- Healthcare: Not only are some AIs able to detect cancer and other health concerns before doctors, they can also provide feedback on patient care based on long-term records and trends.
- Predicting market trends: Much like prescriptive analysis in the business analytics world, AI systems can be trained to predict trends in larger markets, which can lead to businesses getting a jump on emerging trends.
- Reducing energy use: Artificial intelligence can streamline energy use in buildings, and even across cities, as well as make better predictions for construction planning, oil and gas drilling, and other energy-centric projects.
- Marketing: AI systems can be trained to increase the value of marketing both toward individuals and larger markets, helping organizations save money and get better marketing results.
If a problem involves data, there’s a good possibility that AI can help. This list is hardly complete, and new innovations in AI and machine learning are being made all the time.
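The fraud-detection and cybersecurity items above both reduce to anomaly detection: model what "normal" looks like, then flag departures from it. A minimal sketch of that idea – the z-score rule, threshold, and transaction amounts below are illustrative stand-ins, and production systems use many features and learned models:

```python
# Minimal anomaly detection, the idea behind the fraud-detection and
# cybersecurity use cases above: model "normal" behaviour (here, just
# the mean and spread of past transaction amounts) and flag values
# that deviate too far. The threshold and data are toy choices for
# illustration; real systems use richer features and learned models.
import statistics

def find_anomalies(values, z_threshold=2.0):
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [v for v in values
            if stdev > 0 and abs(v - mean) / stdev > z_threshold]

amounts = [42.0, 39.5, 41.0, 40.2, 43.1, 38.9, 950.0]  # one suspicious charge
print(find_anomalies(amounts))  # -> [950.0]
```

The same skeleton – learn a baseline, score deviations – underlies intrusion detection, equipment-failure prediction, and many of the other list items.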
What AI platforms are available?
When adopting an AI strategy, it’s important to know what sorts of software are available for business-focused AI. There are a wide variety of platforms available from the usual cloud-hosting suspects like Google, AWS, Microsoft, and IBM, and choosing the right one can mean the difference between success and failure.
AWS Machine Learning offers a wide variety of tools that run in the AWS cloud. AI services, pre-built frameworks, analytics tools, and more are all available, with many designed to take the legwork out of getting started. AWS offers pre-built algorithms, one-click machine learning training, and training tools for developers getting started in, or expanding their knowledge of AI development.
Google Cloud offers similar AI solutions to AWS, as well as having several pre-built total AI solutions that organizations can (ideally) plug into their organizations with minimal effort. Google’s AI offerings include the TensorFlow open source machine learning library.
Microsoft’s AI platform comes with pre-generated services, ready-to-deploy cloud infrastructure, and a variety of additional AI tools that can be plugged in to existing models. Its AI Lab also offers a wide range of AI apps that developers can tinker with and learn from what others have done. Microsoft also offers an AI school with educational tracks specifically for business applications.
Watson is IBM’s version of cloud-hosted machine learning and business AI, but it goes a bit further with more AI options. IBM offers on-site servers custom built for AI tasks for businesses that don’t want to rely on cloud hosting, and it also has IBM AI OpenScale, an AI platform that can be integrated into other cloud hosting services, which could help to avoid vendor lock-in.
Before choosing an AI platform, it’s important to determine what sorts of skills you have available within your organization, and what skills you’ll want to focus on when hiring new AI team members. The platforms can require specialization in different sorts of development and data science skills, so be sure to plan accordingly.
What AI skills will businesses need to invest in?
With business AI taking so many forms, it can be tough to determine what skills an organization needs to implement it.
As previously reported by TechRepublic, finding employees with the right set of AI skills is the problem most commonly cited by organizations looking to get started with artificial intelligence.
Skills needed for an AI project differ based on business needs and the platform being used, though most of the biggest platforms (like those listed above) support most, if not all, of the most commonly used programming languages and skills needed for AI.
SEE: Don’t miss our latest coverage about AI (TechRepublic on Flipboard)
In March 2018, TechRepublic covered the 10 most in-demand AI skills, an excellent summary of the types of training an organization should look at when building or expanding a business AI team.
Many business AI platforms offer training courses in the specifics of running their architecture and the programming languages needed to develop more AI tools. Businesses that are serious about AI should plan to either hire new employees or give existing ones the time and resources necessary to train in the skills needed to make AI projects succeed.
How can businesses start using artificial intelligence?
Getting started with business AI isn’t as easy as simply spending money on an AI platform provider and spinning up some pre-built models and algorithms. There’s a lot that goes into successfully adding AI to an organization.
At the heart of it all is good project planning. Adding artificial intelligence to a business, no matter how it will be used, is just like any business transformation initiative. Here is an outline of just one way to approach getting started with business AI.
1. Determine your AI objective. Figure out how AI can be used in your organization and to what end. By focusing on a narrower implementation with a specific goal, you can better allocate resources.
2. Identify what needs to happen to get there. Once you know where you want to be, you can figure out where you are and how to make the journey. This could include starting to sort existing data, gathering new data, hiring talent, and other pre-project steps.
3. Build a team. With an end goal in sight and a plan to get there, it’s time to assemble the best team to make it happen. This can include current employees, but don’t be afraid to go outside the organization to find the most qualified people. Also, be sure to allow existing staff to train so they have the opportunity to contribute to the project.
4. Choose an AI platform. Some AI platforms may be better suited to particular projects, but by and large they all offer similar products in order to compete with each other. Let your team give recommendations on which AI platform to choose–they’re the experts who will be in the trenches.
5. Begin implementation. With a goal, team, and platform, you’re ready to start working in earnest. This won’t be quick: AI machines need to be trained, testing on subsets of data has to be performed, and lots of tweaks will need to be made before a business AI is ready to hit the real world.