
A team of scientists from the University of Michigan Rogel Cancer Center has developed the first drug-like compounds to inhibit a key family of enzymes whose malfunction is associated with several types of cancer, including an aggressive form of childhood leukaemia.

The enzymes — known as the nuclear receptor-binding SET domain (NSD) family of histone methyltransferases — have long been an attractive drug target, but previous attempts to attack them have come up short because the shape of their binding sites makes it difficult for drug-like molecules to bind to them.


The research team — led by Tomasz Cierpicki, Ph.D., and Jolanta Grembecka, Ph.D. — used a variety of techniques including X-ray crystallography and nuclear magnetic resonance to develop first-in-class inhibitors of a key protein known as NSD1, according to findings published in Nature Chemical Biology.

The team’s lead compound — known as BT5 — showed promising activity in leukaemia cells with the NUP98-NSD1 chromosomal translocation that is seen in a subset of pediatric leukaemia patients.

“Our study, which was years in the making, demonstrates that targeting this key enzyme with small-molecule inhibitors is a feasible approach,” says Cierpicki, an associate professor of biophysics and pathology at U-M. “These findings will facilitate the development of the next generation of potent and selective inhibitors of these enzymes, which are overexpressed, mutated or undergo translocations in several types of cancer.”

Source: University of Michigan Health System



New technology from Stanford scientists finds long-hidden quakes, and possible clues about how earthquakes evolve


Measures of Earth’s vibrations zigged and zagged across Mostafa Mousavi’s screen one morning in Memphis, Tenn. As part of his PhD studies in geophysics, he sat scanning earthquake signals recorded the night before, verifying that decades-old algorithms had detected true earthquakes rather than tremors generated by ordinary things like crashing waves, passing trucks or stomping football fans.

“I did all this tedious work for six months, looking at continuous data,” Mousavi, now a research scientist at Stanford’s School of Earth, Energy & Environmental Sciences (Stanford Earth), recalled recently. “That was the point I thought, ‘There has to be a much better way to do this stuff.’”

This was in 2013. Handheld smartphones were already loaded with algorithms that could break down speech into sound waves and come up with the most likely words in those patterns. Using artificial intelligence, they could even learn from past recordings to become more accurate over time.


The Loma Prieta earthquake, which severely shook the San Francisco and Monterey Bay regions in October 1989, occurred mostly on a previously unknown fault. Image credit: J.K. Nakata, USGS

Seismic waves and sound waves aren’t so different. One moves through rock and fluid, the other through air. Yet while machine learning had transformed the way personal computers process and interact with voice and sound, the algorithms used to detect earthquakes in streams of seismic data have hardly changed since the 1980s.
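Those decades-old detectors are, for the most part, variants of the short-term-average/long-term-average (STA/LTA) trigger. Here is a minimal Python sketch of the idea; the window lengths, threshold and simulated trace are illustrative choices, not values from any real monitoring system:

```python
import numpy as np

def sta_lta_trigger(signal, sta_len=50, lta_len=500, threshold=4.0):
    """Flag samples where the short-term average energy jumps well above
    the long-term average -- the classic 1980s-style trigger."""
    energy = signal ** 2
    sta = np.convolve(energy, np.ones(sta_len) / sta_len, mode="same")
    lta = np.convolve(energy, np.ones(lta_len) / lta_len, mode="same")
    ratio = np.where(lta > 0, sta / lta, 0.0)
    return ratio > threshold  # boolean mask of candidate detections

# Synthetic trace: background noise with one strong burst.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 5000)
trace[3000:3100] += 20.0 * rng.normal(0.0, 1.0, 100)  # simulated arrival
picks = sta_lta_trigger(trace)
```

The weakness the article describes falls out directly: a quake whose burst barely rises above the background noise never pushes the ratio past the threshold, so small events go unrecorded.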

That has left a lot of earthquakes undetected.

Big quakes are hard to miss, but they’re rare. Meanwhile, imperceptibly small quakes happen all the time. Occurring on the same faults as bigger earthquakes – and involving the same physics and the same mechanisms – these “microquakes” represent a cache of untapped information about how earthquakes evolve – but only if scientists can find them.

In a recent paper published in Nature Communications, Mousavi and co-authors describe a new method for using artificial intelligence to bring into focus millions of these subtle shifts of the Earth. “By improving our ability to detect and locate these very small earthquakes, we can get a clearer view of how earthquakes interact or spread out along the fault, how they get started, even how they stop,” said Stanford geophysicist Gregory Beroza, one of the paper’s authors.

Focusing on what matters

Mousavi began working on technology to automate earthquake detection soon after his stint examining daily seismograms in Memphis, but his models struggled to tune out the noise inherent to seismic data. A few years later, after joining Beroza’s lab at Stanford in 2017, he started to think about how to solve this problem using machine learning.


Earthquakes detected and located by EarthquakeTransformer in the Tottori area. Image credit: Mousavi et al., 2020 Nature Communications

The group has produced a series of increasingly powerful detectors. A 2018 model called PhaseNet, developed by Beroza and graduate student Weiqiang Zhu, adapted algorithms from medical image processing to excel at phase-picking, which involves identifying the precise start of two different types of seismic waves. Another machine learning model, released in 2019 and dubbed CRED, was inspired by voice-trigger algorithms in virtual assistant systems and proved effective at detection. Both models learned the fundamental patterns of earthquake sequences from a relatively small set of seismograms recorded only in northern California.

In the Nature Communications paper, the authors report they’ve developed a new model to detect very small earthquakes with weak signals that current methods usually overlook, and to pick out the precise timing of the seismic phases using earthquake data from around the world. They call it Earthquake Transformer.

According to Mousavi, the model builds on PhaseNet and CRED, and “embeds those insights I got from the time I was doing all of this manually.” Specifically, Earthquake Transformer mimics the way human analysts look at the set of wiggles as a whole and then home in on a small section of interest.

People do this intuitively in daily life – tuning out less important details to focus more intently on what matters. Computer scientists call it an “attention mechanism” and frequently use it to improve text translations. But it’s new to the field of automated earthquake detection, Mousavi said. “I envision that this new generation of detectors and phase-pickers will be the norm for earthquake monitoring within the next year or two,” he said.
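An attention mechanism can be stated in a few lines. The sketch below is a generic scaled dot-product attention in Python with NumPy, not code from Earthquake Transformer itself; the toy keys, values and query are made up for illustration:

```python
import numpy as np

def attention(query, keys, values):
    """Weight each value by how well its key matches the query,
    so downstream layers 'focus' on the most relevant inputs."""
    scores = query @ keys.T / np.sqrt(keys.shape[-1])
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ values, weights

# Three inputs; the query resembles the second key, so most of the
# attention weight lands on the second value.
keys = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
values = np.array([[10.0], [20.0], [30.0]])
query = np.array([0.0, 5.0])
out, w = attention(query, keys, values)
```

The softmax weights sum to one, so the output is a blend of the values dominated by whichever input matched the query best, which is the "tuning out less important details" the text describes.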

The technology could allow analysts to focus on extracting insights from a more complete catalogue of earthquakes, freeing up their time to think more about what the pattern of earthquakes means, said Beroza, the Wayne Loel Professor of Earth Science at Stanford Earth.

Hidden faults

Understanding patterns in the accumulation of small tremors over decades or centuries could be key to minimizing surprises – and damage – when a larger quake strikes.

The 1989 Loma Prieta quake ranks as one of the most destructive earthquake disasters in U.S. history, and as one of the largest to hit northern California in the past century. It’s a distinction that speaks less to extraordinary power in the case of Loma Prieta than to gaps in earthquake preparedness, hazard mapping and building codes – and to the extreme rarity of large earthquakes.

Only about one in five of the approximately 500,000 earthquakes detected globally by seismic sensors every year produces shaking strong enough for people to notice. In a typical year, perhaps 100 quakes will cause damage.

In the late 1980s, computers were already at work analyzing digitally recorded seismic data, and they determined the occurrence and location of earthquakes like Loma Prieta within minutes. Limitations in both the computers and the waveform data, however, left many small earthquakes undetected and many larger earthquakes only partially measured.

After the harsh lesson of Loma Prieta, many California communities have come to rely on maps showing fault zones and the areas where quakes are likely to do the most damage. Fleshing out the record of past earthquakes with Earthquake Transformer and other tools could make those maps more accurate and help to reveal faults that might otherwise come to light only in the wake of destruction from a larger quake, as happened with Loma Prieta in 1989, and with the magnitude-6.7 Northridge earthquake in Los Angeles five years later.

“The more information we can get on the deep, three-dimensional fault structure through improved monitoring of small earthquakes, the better we can anticipate earthquakes that lurk in the future,” Beroza said.

Earthquake Transformer

To determine an earthquake’s location and magnitude, existing algorithms and human experts alike look for the arrival time of two types of waves. The first set, known as primary or P waves, advance quickly – pushing, pulling and compressing the ground like a Slinky as they move through it. Next come shear or S waves, which travel more slowly but can be more destructive as they move the Earth side to side or up and down.
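The gap between those two arrivals is what lets analysts locate a quake. A rough back-of-the-envelope version, using textbook crustal velocities (roughly 6 km/s for P waves and 3.5 km/s for S waves; real velocity models vary with region and depth):

```python
def distance_from_sp_lag(sp_seconds, vp=6.0, vs=3.5):
    """Both waves leave the source together; the S-P lag grows with
    distance d: d/vs - d/vp = lag, so d = lag / (1/vs - 1/vp)."""
    return sp_seconds / (1.0 / vs - 1.0 / vp)

print(round(distance_from_sp_lag(10.0), 1))  # a 10 s gap puts the source ~84 km away
```

With arrivals from several stations, intersecting these distance circles pinpoints the epicentre, which is why precise phase-picking matters so much.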

To test the Earthquake Transformer, the team wanted to see how it worked with earthquakes not included in the training data that are used to teach algorithms what a true earthquake and its seismic phases look like. The training data comprised one million hand-labelled seismograms recorded around the world, mostly over the past two decades, excluding Japan. For the test, they selected five weeks of continuous data recorded in the region of Japan shaken 20 years ago by the magnitude-6.6 Tottori earthquake and its aftershocks.

The model detected and located 21,092 events – more than two and a half times the number of earthquakes picked out by hand, using data from only 18 of the 57 stations that Japanese scientists originally used to study the sequence. Earthquake Transformer proved particularly effective for the tiny earthquakes that are harder for humans to pick out and that are being recorded in overwhelming numbers as seismic sensors multiply.

“Previously, people had designed algorithms to say, find the P wave. That’s a relatively simple problem,” explained co-author William Ellsworth, a research professor in geophysics at Stanford. Pinpointing the start of the S wave is more difficult, he said, because it emerges from the erratic last gasps of the fast-moving P waves. Other algorithms have been able to produce extremely detailed earthquake catalogs, including huge numbers of small earthquakes missed by analysts – but their pattern-matching algorithms work only in the region supplying the training data.

With Earthquake Transformer running on a simple computer, analysis that would ordinarily take months of expert labor was completed within 20 minutes. That speed is made possible by algorithms that search for the existence of an earthquake and the timing of the seismic phases in tandem, using information gleaned from each search to narrow down the solution for the others.

“Earthquake Transformer gets many more earthquakes than other methods, whether it’s people sitting and trying to analyze things by looking at the waveforms, or older computer methods,” Ellsworth said. “We’re getting a much deeper look at the earthquake process, and we’re doing it more efficiently and accurately.”

The researchers trained and tested Earthquake Transformer on historic data, but the technology is ready to flag tiny earthquakes almost as soon as they happen. According to Beroza, “Earthquake monitoring using machine learning in near real-time is coming very soon.”

Source: Stanford University



Google Pixel 4a Review | NDTV Gadgets 360

After the disappointing launch price of the Pixel 3a in India last year, and the decision to not launch the Pixel 4, there has been little reason to get excited about the new models launched this year. Google is not launching the Pixel 5 or the Pixel 4a 5G in India, at least not yet, but it has launched the Pixel 4a. The most affordable member of this year’s Pixel series, the Pixel 4a is priced a lot more aggressively this time around in India, at Rs. 31,999.

This year, Google is keeping things simple. There’s just one version of the Pixel 4a, so no XL option. It’s also available in only one configuration, with 6GB of RAM and 128GB of storage, and in only one colour – Just Black. I have spent a lot of time with it following my initial impressions a few weeks ago, and now it’s time to see if Google has done enough this year to get people interested again.

Google Pixel 4a design

There’s something very likeable about the Google Pixel 4a’s design. It’s not flashy or in-your-face; in fact it’s the exact opposite and yet it looks attractive. Google has used a unibody polycarbonate shell with a soft-touch matte finish. It looks nice and doesn’t attract fingerprints. The Pixel 4a is relatively slim at 8.2mm and really light, at just 143g. The overall compact dimensions of the body and the rounded edges make it a very comfortable phone to handle.

The volume and power buttons are placed on the right, and offer good tactile feedback. There’s a headphone jack on the top, a tray for a single Nano-SIM on the left, and the speaker and USB Type-C port on the bottom. The Google Pixel 4a only accepts a single physical SIM, but it does support an additional eSIM.

The back has a capacitive fingerprint sensor, so there’s no in-display sensor despite this phone having an OLED panel. This isn’t a big deal, as the fingerprint sensor works very well and can be used to pull down the notification shade with a swipe gesture. However, there’s no option for face recognition on the Pixel 4a.

The Pixel 4a has a simple design and yet looks good


Google has thankfully ditched the massive bezel of the previous generation for much narrower ones on the Pixel 4a. The borders are still a bit thick but they’re more or less even all around the display. You get a hole-punch cutout for the selfie camera. The display is a bit larger than that of the Pixel 3a, measuring 5.8 inches diagonally. It’s an OLED panel with a full-HD+ resolution. It supports HDR10 playback and is protected by Gorilla Glass 3 against scratches.

One feature that’s missing compared to last year’s model is the Active Edge sensors. On previous Pixel phones, you used to be able to activate Google Assistant by squeezing the pressure-sensitive side panels. On the other hand, Google has kept the Now Playing feature, which automatically recognises songs being played in the background and displays the title and artist on your lockscreen or always-on display.

In the retail box of the Google Pixel 4a, you’ll find an 18W Type-C charger, a USB Type-C to Type-C cable, a Quick Switch adapter for importing data from an older phone, a SIM tool, and documentation. You don’t get any case or headset.

Google Pixel 4a performance and battery life

The Google Pixel 4a uses the Qualcomm Snapdragon 730G SoC, which is not the most powerful SoC you’ll find in phones at this price, but is good enough. There’s 6GB of LPDDR4X RAM and 128GB of storage, which again, are fairly adequate. The Pixel 4a supports 4G VoLTE, dual-band Wi-Fi ac, Bluetooth 5, NFC, and four satellite navigation systems. There’s no wireless charging or IP rating, but you do get stereo speakers. The Pixel 4a also features Google’s Titan M security hardware for biometric authentication and other security-related functions.


The Pixel 4a runs lean stock Android without any bloatware


As for software, units in the market at the time of the India launch run Android 10 out of the box, but a final Android 11 update is available. My review unit was already running Android 11 when I began using it. If you’ve used a Pixel smartphone before, you know what to expect. The interface is completely clean, with no bloatware and just the essential Google apps preinstalled. There’s a Personal Safety app from Google which lets you set up emergency contacts, etc. There’s a Pixel Tips app to help first-time Pixel users get acquainted with their smartphone.

Google has incorporated some basic gestures, which can be found in the Settings app. You can enable gestures to quickly access the camera, silence an incoming call, etc. Being a Pixel phone, Google offers a minimum of three years of OS and security updates.

The relatively powerful hardware combined with Google’s lean software makes the usage experience wonderful. Unlocking the phone with the fingerprint sensor is quick, the interface is snappy, and the always-on display is great for peeking at the time or unread alerts. Google Assistant is speedy too, be it transcribing what you just said or fetching search results. The Pixel 4a unfortunately misses out on a higher refresh rate display, even 90Hz, which would have made the experience even better.

I found the display to be pretty good for watching content on. Colours are vivid, blacks are deep, and text is generally sharp. The screen gets very bright too but whites look a bit murky even at full brightness. This is especially noticeable when compared side by side with something like the OnePlus Nord, which is in the same price segment. HDR content looks good, whether played locally or through streaming apps. The stereo speakers sound good and get decently loud, although the bottom-firing one is a bit louder than the earpiece.

Gaming was also enjoyable. Everything from simple titles such as Mars: Mars, to heavier ones such as Call of Duty: Mobile ran smoothly. I didn’t feel any heating issues either, other than the side of the frame getting a bit warm.


Apps and games run well on the Pixel 4a, with no real heating issues


The Google Pixel 4a has a 3,140mAh battery, which is a modest capacity by 2020 standards. Unsurprisingly, it didn’t fare too well in our HD video battery loop test, running for a little more than twelve and a half hours. However, I am happy to report that with medium to light real-world usage, I was able to make the Pixel 4a last for one full day on a single charge. On days with lots of camera usage and video watching, it did drain a bit faster, so if you’re expecting a phone that can last more than a day, you might be a little disappointed.

The Pixel 4a can fast-charge its battery with the bundled 18W adapter to about 52 percent in half an hour, and up to 88 percent in an hour. It took about 15-20 minutes more to reach full capacity. Since it uses the USB Power Delivery (PD) standard, you can use any Type-C PD charger to quickly charge the Pixel 4a.
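As a quick sanity check on those figures (assuming the charge percentages map linearly onto the 3,140mAh capacity, which ignores real charging chemistry), the implied average charging current tapers noticeably after the first half hour:

```python
capacity_mah = 3140
charge_at = {30: 0.52, 60: 0.88}  # minutes -> fraction charged (review's figures)

first_half_ma = capacity_mah * charge_at[30] / 0.5                     # ~3266 mA average
second_half_ma = capacity_mah * (charge_at[60] - charge_at[30]) / 0.5  # ~2261 mA average
```

That taper is typical of USB PD charging, which slows down as the battery approaches full capacity, and explains the extra 15-20 minutes for the last 12 percent.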

Google Pixel 4a cameras

The Google Pixel 3a had an impressive set of cameras, not just for its segment, but in general. The Pixel 4a sticks to a single front and rear camera, with the same resolutions as their predecessors. The rear camera has a 12.2-megapixel sensor and an f/1.7 aperture, dual-pixel PDAF, and optical stabilisation. The front camera uses an 8-megapixel sensor and has an f/2.0 aperture. Sadly, there isn’t a physical ultra-wide-angle rear camera like you get on the 5G variant of the Pixel 4a, and on most other phones at this price level now.


The Pixel 4a has just a single rear camera

However, Google hasn’t skimped on camera features in software, which are mostly the same as what you’d get with the flagship Pixel 5. There’s Night Sight, Top Shot, Super Res Zoom, Motion Autofocus, and Live HDR+. Frequent Faces, when enabled, is said to recognise and recommend shots that are focused on faces you capture often, when selecting a Top Shot or Motion Photo. When shooting stills, the Google Pixel 4a lets you tweak the exposure and shadows independently before taking a shot, and even shows you the effects of each adjustment in real time in the viewfinder. For videos, you can manually adjust the exposure too, and tapping the viewfinder once will begin focus tracking.

The camera app has nearly all the shooting modes one would expect. There’s no manual mode, but you can enable RAW capture through the Settings menu.

Landscape photos shot during the day looked stunning. The Google Pixel 4a managed to capture natural-looking colours and well-balanced exposures. Details were fairly good, but when magnified, I noticed a bit of noise, and finer textures and edges didn’t have very good definition. Close-up shots had very good details, rich colours, and a pleasing background blur. In Portrait mode, I could digitally zoom in up to 4x. Portrait shots generally looked striking with good edge detection, details, and colours.


Google Pixel 4a camera sample (tap to see larger image)


Google Pixel 4a close-up camera sample (tap to see larger image)


Google Pixel 4a portrait camera sample (tap to see larger image)


The Pixel 4a did an equally good job with low-light photos. Even without Night Sight, images looked clean with minimal noise, colours were vivid, and details were well defined. Night Sight helps correct the exposure a bit, and in very dark scenes, it can make an impactful difference.

Pixel smartphones have thus far been very good for selfies, and that continues. Selfies shot in daylight pack in very good detail, and thanks to the wide field of view, you can get quite a bit of the background in the frame. Portrait mode works well for selfies too. In low light, Night Sight makes a big difference to the type of photos you can capture. When used in combination with the screen flash (which is more of a fill-light than a flash), the results are even better.


Google Pixel 4a Night Sight camera sample (tap to see larger image)


Google Pixel 4a selfie camera sample (tap to see larger image)


The Google Pixel 4a can shoot up to 4K video at 30fps. During the day, I found the quality and stabilisation to be very good. Videos captured with the selfie camera are also electronically stabilised. Even in low light, video quality is pretty decent, with good exposure and a tolerable amount of shimmer when you walk.

I really wish Google had included an ultra-wide-angle camera, as that would have made the setup pretty much perfect. Even so, both cameras on the Pixel 4a deliver consistent and reliable results.

Verdict: Should you buy the Pixel 4a?

The Google Pixel 4a is being sold on Flipkart at a promotional price of Rs. 29,999, which is a bit lower than its official retail price of Rs. 31,999. I think it’s a good buy at this price for anyone looking to capture good photos and video with their smartphone. Unlike last year’s Pixel 3a, the Pixel 4a isn’t crippled too much in terms of processing power. It features a good SoC as well as enough RAM and storage to offer decent gaming performance. Battery life might not be as good as what the competition achieves, but despite its small capacity, you should expect this phone to last nearly a full day on average.

The OnePlus Nord is a very tempting competitor to the Google Pixel 4a, and it manages to one-up this phone in almost all areas, on paper anyway. So which one should you buy? That’s a discussion for another article, coming up very soon.



Artificial intelligence: Cheat sheet – TechRepublic


Learn artificial intelligence basics, business use cases, and more in this beginner’s guide to using AI in the enterprise.

Artificial intelligence (AI) is the next big thing in business computing. Its uses come in many forms, from simple tools that respond to customer chat, to complex machine learning systems that predict the trajectory of an entire organization. Popularity does not necessarily lead to familiarity, and despite its constant appearance as a state-of-the-art feature, AI is often misunderstood. 

In order to help business leaders understand what AI is capable of, how it can be used, and where to begin an AI journey, it’s essential to first dispel the myths surrounding this huge leap in computing technology. Learn more in this AI cheat sheet. This article is also available as a download, Cheat sheet: Artificial intelligence (free PDF).

SEE: All of TechRepublic’s cheat sheets and smart person’s guides

What is artificial intelligence?

When AI comes to mind, it’s easy to get pulled into a world of science-fiction robots like Data from Star Trek: The Next Generation, Skynet from the Terminator series, and Marvin the paranoid android from The Hitchhiker’s Guide to the Galaxy.

The reality of AI is nothing like fiction, though. Instead of fully autonomous thinking machines that mimic human intelligence, we live in an age where computers can be taught to perform limited tasks that involve making judgments similar to those made by people, but are far from being able to reason like human beings. 

Modern AI can perform image recognition, understand the natural language and writing patterns of humans, make connections between different types of data, identify abnormalities in patterns, strategize, predict, and more. 

All artificial intelligence comes down to one core concept: Pattern recognition. At the core of all applications and varieties of AI is the simple ability to identify patterns and make inferences based on those patterns. 
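At its simplest, that idea can be reduced to a few lines of Python: a 1-nearest-neighbour classifier that labels new data by the stored pattern it most resembles. The patterns and labels below are made up for illustration; real systems learn far richer representations, but the principle is the same:

```python
import numpy as np

def nearest_pattern(sample, patterns, labels):
    """Label a sample with the label of the closest stored pattern."""
    dists = [np.linalg.norm(sample - p) for p in patterns]
    return labels[int(np.argmin(dists))]

patterns = np.array([[0.0, 0.0], [10.0, 10.0]])
labels = ["background", "signal"]
print(nearest_pattern(np.array([9.0, 11.0]), patterns, labels))  # "signal"
```

Everything from image recognition to fraud detection is, at heart, a far more sophisticated version of this matching step.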

SEE: Artificial intelligence: A business leader’s guide (free PDF) (TechRepublic)

AI isn’t truly intelligent in the way we define intelligence: It can’t think and lacks reasoning skills, it doesn’t show preferences or have opinions, and it’s not able to do anything outside of the very narrow scope of its training. 

That doesn’t mean AI isn’t useful for businesses and consumers trying to solve real-world problems; it just means that we’re nowhere close to machines that can actually make independent decisions or arrive at conclusions without being given the proper data first. Artificial intelligence is a marvel of technology, but it remains far from replicating human intelligence or truly intelligent behavior.

Additional resources

What can artificial intelligence do?

AI’s power lies in its ability to become incredibly skilled at the things humans train it to do. Microsoft and Alibaba independently built AI machines capable of better reading comprehension than humans, Microsoft has AI that is better at speech recognition than its human builders, and some researchers predict that AI will outperform humans at almost everything in less than 50 years.

That doesn’t mean those AI creations are truly intelligent–only that they’re capable of performing human-like tasks with greater efficiency than us error-prone organic beings. If you were to try, say, to give a speech recognition AI an image-recognition task, it would fail completely. All AI systems are built for very specific tasks, and they don’t have the capability to do anything else. 

Since the COVID-19 pandemic began in early 2020, artificial intelligence and machine learning have seen a surge of activity as businesses rush to fill holes left by employees forced to work remotely, or those who’ve lost jobs due to the financial strain of the pandemic.

The quick adoption of AI during the pandemic highlights another important thing that AI can do: Replace human workers. According to Gartner, 79% of businesses are currently exploring or piloting AI projects, meaning most of those projects are still in early stages of development. What the pandemic has done for AI is cause a shift in priorities and applications: Instead of focusing on financial analysis and consumer insight, post-pandemic AI projects are focusing on customer experience and cost optimization, Algorithmia found.

Like other AI applications, customer experience and cost optimization are based on pattern recognition. In the case of the former, AI bots can perform many basic customer service tasks, freeing employees up to only address cases that need human intervention. AI like this has been particularly widespread during the pandemic, when workers forced out of call centers put stress on the customer service end of business.

Additional resources

What are the business applications of artificial intelligence?

Modern AI systems are capable of amazing things, and it’s not hard to imagine what kind of business tasks and problem solving exercises they could be suited to. Think of any routine task, even incredibly complicated ones, and there’s a possibility an AI can do it more accurately and quickly than a human–just don’t expect it to do science fiction-level reasoning.

In the business world, there are plenty of AI applications, but perhaps none is gaining traction as much as business analytics and its end goal: Prescriptive analytics.

Business analytics is a complicated set of processes that aim to model the present state of a business, predict where it will go if kept on its current trajectory, and model potential futures with a given set of changes. Prior to the AI age, analytics work was slow, cumbersome, and limited in scope.

SEE: Special report: Managing AI and ML in the enterprise (ZDNet) | Download the free PDF version (TechRepublic)

When modeling the past of a business, it’s necessary to account for nearly endless variables, sort through tons of data, and include all of it in an analysis that builds a complete picture of the up-to-the-present state of an organization. Think about the business you’re in and all the things that need to be considered, and then imagine a human trying to calculate all of it–cumbersome, to say the least.

Predicting the future with an established model of the past can be easy enough, but prescriptive analysis, which aims to find the best possible outcome by tweaking an organization’s current course, can be downright impossible without AI help. 
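Stripped to its essence, prescriptive analytics is a search over decisions: score each candidate course of action with a predictive model and pick the best. The toy model below (a hypothetical linear demand curve, not a real business model) shows the shape of that loop:

```python
def predicted_profit(price):
    """Hypothetical predictive model: profit at a given price,
    assuming demand falls linearly as price rises."""
    demand = max(0.0, 100 - 2 * price)
    return price * demand

# Prescriptive step: sweep the decision variable, keep the best outcome.
best_price = max(range(0, 51), key=predicted_profit)
print(best_price, predicted_profit(best_price))  # 25 1250.0
```

Real organizations face thousands of interacting decision variables rather than one, which is why this search quickly becomes infeasible without AI help.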

SEE: Artificial intelligence ethics policy (TechRepublic Premium)

There are many artificial intelligence software platforms and AI machines designed to do all that heavy lifting, and the results are transforming businesses: What was once out of reach for smaller organizations is now feasible, and businesses of all sizes can make the most of each resource by using artificial intelligence to design the perfect future. 

Analytics may be the rising star of business AI, but it’s hardly the only application of artificial intelligence in the commercial and industrial worlds. Other AI use cases for businesses include the following. 

  • Recruiting and employment: Human beings can often overlook qualified candidates, or candidates can fail to make themselves noticed. Artificial intelligence can streamline recruiting by filtering through larger numbers of candidates more quickly, and by noticing qualified people who may go overlooked. 
  • Fraud detection: Artificial intelligence is great at picking up on subtle differences and irregular behavior. If trained to monitor financial and banking traffic, AI systems can pick up on subtle indicators of fraud that humans may miss.
  • Cybersecurity: Just as with financial irregularities, artificial intelligence is great at detecting indicators of hacking and other cybersecurity issues.
  • Data management: AI can categorize raw data and find previously unknown relations between items.
  • Customer relations: Modern AI-powered chatbots are incredibly good at carrying on conversations thanks to natural language processing. AI chatbots can be a great first line of customer interaction.
  • Healthcare: Not only are some AIs able to detect cancer and other health concerns before doctors, they can also provide feedback on patient care based on long-term records and trends.
  • Predicting market trends: Much like prescriptive analysis in the business analytics world, AI systems can be trained to predict trends in larger markets, which can lead to businesses getting a jump on emerging trends.
  • Reducing energy use: Artificial intelligence can streamline energy use in buildings, and even across cities, as well as make better predictions for construction planning, oil and gas drilling, and other energy-centric projects.
  • Marketing: AI systems can be trained to increase the value of marketing both toward individuals and larger markets, helping organizations save money and get better marketing results.
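To make the fraud-detection idea above concrete, here is a minimal sketch in plain Python. It is not from any particular platform; it uses a simple statistical z-score test (flagging transactions far from the average) rather than a trained model, which is how a production fraud system would typically work. The function name and thresholds are illustrative assumptions.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return the indexes of transactions whose amount lies more than
    `threshold` standard deviations from the mean (a simple z-score test).
    The threshold is a tunable assumption, not an industry standard."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [i for i, x in enumerate(amounts)
            if sigma > 0 and abs(x - mu) / sigma > threshold]

# Typical card spend with one wildly out-of-pattern charge at index 7.
txns = [23.50, 41.20, 18.75, 30.00, 27.10, 22.40, 35.60, 9800.00]
print(flag_anomalies(txns))  # [7]
```

A real system would learn what "normal" looks like per account and per merchant; the point here is only that irregular behavior becomes a statistical outlier that software can surface automatically.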

If a problem involves data, there's a good chance AI can help. This list is hardly complete, and innovations in AI and machine learning arrive all the time.

What AI platforms are available?

When adopting an AI strategy, it’s important to know what sorts of software are available for business-focused AI. There are a wide variety of platforms available from the usual cloud-hosting suspects like Google, AWS, Microsoft, and IBM, and choosing the right one can mean the difference between success and failure.

AWS Machine Learning offers a wide variety of tools that run in the AWS cloud. AI services, pre-built frameworks, analytics tools, and more are all available, with many designed to take the legwork out of getting started. AWS offers pre-built algorithms, one-click machine learning training, and training tools for developers getting started in, or expanding their knowledge of, AI development.

Google Cloud offers AI solutions similar to AWS's, as well as several pre-built, end-to-end AI solutions that organizations can (ideally) adopt with minimal effort. Google's AI offerings include the TensorFlow open source machine learning library.

Microsoft’s AI platform comes with pre-built services, ready-to-deploy cloud infrastructure, and a variety of additional AI tools that can be plugged into existing models. Its AI Lab also offers a wide range of AI apps that developers can tinker with and learn from. Microsoft also offers an AI school with educational tracks specifically for business applications.

Watson is IBM’s cloud-hosted machine learning and business AI platform, and it goes a bit further than the others on deployment options. IBM offers on-site servers custom built for AI tasks for businesses that don’t want to rely on cloud hosting, and it also has IBM AI OpenScale, an AI platform that can be integrated into other cloud hosting services, which could help to avoid vendor lock-in.

Before choosing an AI platform, it’s important to determine what sorts of skills you have available within your organization, and what skills you’ll want to focus on when hiring new AI team members. The platforms can require specialization in different sorts of development and data science skills, so be sure to plan accordingly.

What AI skills will businesses need to invest in?

With business AI taking so many forms, it can be tough to determine what skills an organization needs to implement it. 

As previously reported by TechRepublic, finding employees with the right set of AI skills is the problem most commonly cited by organizations looking to get started with artificial intelligence. 

Skills needed for an AI project differ based on business needs and the platform being used, though most of the biggest platforms (like those listed above) support most, if not all, of the most commonly used programming languages and skills needed for AI.

In March 2018, TechRepublic covered the 10 most in-demand AI skills, which make an excellent summary of the types of training an organization should look at when building or expanding a business AI team:

  1. Machine learning 
  2. Python
  3. R 
  4. Data science
  5. Hadoop
  6. Big data
  7. Java 
  8. Data mining
  9. Spark
  10. SAS

Many business AI platforms offer training courses in the specifics of running their architecture and the programming languages needed to develop more AI tools. Businesses that are serious about AI should plan to either hire new employees or give existing ones the time and resources necessary to train in the skills needed to make AI projects succeed.
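The first two skills on the list, machine learning and Python, can be illustrated together in a few lines. The sketch below (an assumption for illustration, not any platform's API) fits a straight line y = w·x + b to data by gradient descent, which is the same training loop that underlies far larger models.

```python
def fit_line(xs, ys, lr=0.05, epochs=2000):
    """Fit y = w*x + b by gradient descent on mean squared error.
    Learning rate and epoch count are illustrative defaults."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]  # generated by y = 2x + 1
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # 2.0 1.0
```

Frameworks like TensorFlow automate exactly this loop at scale, which is why training in the underlying concepts pays off regardless of which platform a business chooses.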

How can businesses start using artificial intelligence?

Getting started with business AI isn’t as easy as simply spending money on an AI platform provider and spinning up some pre-built models and algorithms. There’s a lot that goes into successfully adding AI to an organization.

At the heart of it all is good project planning. Adding artificial intelligence to a business, no matter how it will be used, is just like any business transformation initiative. Here is an outline of just one way to approach getting started with business AI.

  1. Determine your AI objective. Figure out how AI can be used in your organization and to what end. By focusing on a narrower implementation with a specific goal, you can better allocate resources.

  2. Identify what needs to happen to get there. Once you know where you want to be, you can figure out where you are and how to make the journey. This could include starting to sort existing data, gathering new data, hiring talent, and other pre-project steps.

  3. Build a team. With an end goal in sight and a plan to get there, it’s time to assemble the best team to make it happen. This can include current employees, but don’t be afraid to go outside the organization to find the most qualified people. Also, be sure to allow existing staff to train so they have the opportunity to contribute to the project.

  4. Choose an AI platform. Some AI platforms may be better suited to particular projects, but by and large they all offer similar products in order to compete with each other. Let your team make recommendations on which AI platform to choose; they're the experts who will be in the trenches.

  5. Begin implementation. With a goal, team, and platform, you’re ready to start working in earnest. This won’t be quick: AI machines need to be trained, testing on subsets of data has to be performed, and lots of tweaks will need to be made before a business AI is ready to hit the real world.
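Step 5's "testing on subsets of data" usually starts with a train/test split: hold some data back so the model is evaluated on records it never saw during training. A minimal stdlib sketch (the function name and 80/20 split are illustrative assumptions, not a prescription):

```python
import random

def train_test_split(records, test_fraction=0.2, seed=42):
    """Shuffle records reproducibly and hold out a test subset,
    so the model is evaluated on data it never trained on."""
    rng = random.Random(seed)  # fixed seed keeps the split repeatable
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

data = list(range(100))
train, test = train_test_split(data)
print(len(train), len(test))  # 80 20
```

Performance measured on the held-out test set, not the training set, is what tells you whether the business AI is ready to hit the real world.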
