
A group of Google artificial intelligence researchers sent a sweeping list of demands to management calling for new policies and leadership changes, escalating a conflict at one of the company’s prized units.

The note centres on the departure of Google AI ethics researcher Timnit Gebru, which set off protests inside the company. Citing that situation, the employees called for a company vice president, Megan Kacholia, to be removed from their reporting chain. “We have lost trust in her as a leader,” the researchers wrote, according to a copy of the letter obtained by Bloomberg.

Gebru has said she was fired after the company rejected a research paper she co-authored that questioned an AI technology at the heart of Google’s search engine. The company has said she resigned and Google’s Chief Executive Officer Sundar Pichai told staff he is investigating the incident.

“Google’s short-sighted decision to fire and retaliate against a core member of the Ethical AI team makes it clear that we need swift and structural changes if this work is to continue, and if the legitimacy of the field as a whole is to persevere,” the letter reads.

It was sent Wednesday to officials including Pichai by employee Alex Hanna, who worked with Gebru, on behalf of Google’s Ethical AI team.

“This research must be able to contest the company’s short-term interests and immediate revenue agendas, as well as to investigate AI that is deployed by Google’s competitors with similar ethical motives,” the researchers added.

The letter urges Google to offer Gebru the chance to return to the company “at a higher level” than the one she had before. The researchers also asked that Kacholia and Jeff Dean, the AI division chief, apologise to Gebru for their treatment of her. It also calls for the company to issue a public commitment to academic integrity and to establish racial literacy training for management.

This is the latest worker uprising at Google. In 2018, thousands of employees walked out and some of them sent the company a list of reform demands, including placing an employee representative on the board. While the company made some changes, such as no longer forcing employees to arbitrate workplace claims, the bulk of those demands remain unmet.

© 2020 Bloomberg LP




Skullcandy Jib True TWS Earbuds With Up to 22 Hours of Battery Life Launched in India


Skullcandy has launched the Jib True truly wireless (TWS) earbuds with up to 22 hours of total battery life. The new earbuds from the American company come in an IPX4 sweat- and water-resistant build. They also feature dual microphones that are touted to provide an enhanced voice calling experience alongside wireless music playback. The Skullcandy Jib True earbuds can also work solo, meaning you can wear either the pair or just one earpiece to save some battery. The Skullcandy Jib True will compete against the identically priced OnePlus Buds Z, and is also likely to take on the likes of the Redmi Earbuds S and Realme Buds Q, which are popular in the affordable TWS segment.

Skullcandy Jib True price in India

Skullcandy Jib True price in India has been set at Rs. 2,999. The earbuds come in Blue and True Black colour options and are available for purchase through the Skullcandy website.

Skullcandy Jib True specifications

The Skullcandy Jib True earbuds feature 40mm drivers with an impedance of 32 ohms and a frequency response of 20Hz-20kHz. There are controls to adjust volume, skip tracks, take voice calls, or activate Google Assistant or Siri, all without taking out the connected phone. The company has also used a noise-isolating fit that comes through a silicone tip. The earbuds come with Bluetooth v5.0 connectivity and are compatible with both Android devices and the iPhone.

In terms of battery life, the Skullcandy Jib True earbuds can last for six hours on a single charge, while the bundled case provides an additional 16 hours of usage. This brings the total to 22 hours of battery life, two hours more than the 20-hour figure promised by the OnePlus Buds Z. However, unlike the OnePlus offering, Skullcandy hasn’t specified whether the Jib True supports any form of fast charging. The new earbuds weigh 228 grams.






These Microsoft Azure tools can help you unlock the secrets lurking in your business data


How to develop business insights from big data using Microsoft’s Azure Synapse and Azure Data Lakes technologies.

Image: Microsoft

Data lakes are an important part of a modern data analysis environment. Instead of importing all your different data sources into one data warehouse, building complex import pipelines for relational, non-relational and other data, and trying to normalise all that data against your choice of keys, you wrap all your data in a single storage environment. On top of that storage pool, you can use a new generation of query tools to explore and analyse the data, working with what could be petabytes in real time. 


Using data this way makes it easier to work with rapidly changing data, getting insights quickly and building reporting environments that can flag up issues as they arise. By wrapping data in one environment, you can take advantage of common access control mechanisms, applying role-based authentication and authorisation, ensuring that the right person gets access to the right data, without leaking it to the outside world. 

Working at scale with Azure Data Lake 

Tools like Azure Active Directory tap into the Microsoft Security Graph to identify common attack patterns quickly, so using them alongside Azure Data Lake can significantly reduce the risk of a breach. 

Once your data is in an Azure Data Lake store, you can start to run your choice of analytics tooling over it, using tools like Azure Databricks, the open-source frameworks in HDInsight, or Azure Synapse Analytics. Working in the cloud makes sense here, as you can take advantage of large-scale Azure VM instances to build in-memory models, as well as scalable storage to build elastic storage pools for your data lake contents. 

Microsoft recently released a second generation of Data Lake Storage, building on Azure Blobs to add disaster recovery and tiered storage to help you manage and optimise your storage costs. Azure Data Lake Storage is designed to sustain gigabits per second of data throughput. A hierarchical namespace makes working with data easier, using directories to manage your data. And as you’re still using a data lake with many different types of data, there’s still no need for expensive and slow ETL-based transformations. 

Analysing data in Azure Synapse 

Normally you need separate analytics tooling for different types of data. If you’re building tooling to work with your own data lake, you’re often bringing together data-warehousing applications alongside big data tools, resulting in complex and often convoluted query pipelines that can be hard to document and debug. Any change in the underlying data model can be catastrophic, thanks to fragile custom analysis environments. 

Azure now offers an alternative, hybrid analytical environment in the shape of Azure Synapse Analytics, which brings together big data tooling and relational queries in a single environment by mixing SQL with Apache Spark and providing direct connections to Azure data services and to the Power Platform. It’s a combination that allows you to work at global scale while still supporting end-user visualisations and reports, and at the same time providing a platform that supports machine-learning techniques to add support for predictive analytics. 

At its heart, Synapse removes the usual barriers between standard SQL queries and big data platforms, using common metadata to work with both its own SQL dialect and Apache Spark on the same data sets, whether relational tables or other stores, including CSV and JSON. It has its own tooling to move data into and out of data lakes, with a web-based development environment for building and exploring analytical models that go straight from data to visualisations. 
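The effect of that shared metadata is easiest to see in miniature. The sketch below uses Python's built-in sqlite3 module purely as a local stand-in (it is not the Synapse API, and the `sales` table and its columns are invented for the example) to show the same pattern: one tabular data set, queried declaratively with SQL and then reshaped programmatically, with no ETL step in between.

```python
import sqlite3

# A stand-in for a table surfaced from the data lake; in Synapse this would be
# a CSV/JSON file or a relational table exposed through shared metadata.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("emea", 120.0), ("emea", 80.0), ("apac", 50.0)],
)

# Declarative pass: an ordinary SQL aggregation over the data set.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()

# Programmatic pass: the same results, reshaped in code for a report,
# much as results move between the SQL and Spark sides in Synapse.
totals = {region: total for region, total in rows}
print(totals)  # {'apac': 50.0, 'emea': 200.0}
```

The point of the sketch is the division of labour: aggregation happens in SQL, presentation logic happens in code, and both operate on the same underlying data set.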

Synapse creates a data lake as part of its setup, by default using a second-generation BLOB-based instance. This hosts your data containers, in a hierarchical virtual file system. Once the data lake and associated Synapse workspace are in place, you can use the Azure Portal to open the Synapse Studio web-based development environment. 


Writing a PySpark query in a Spark (Scala) notebook in Azure Synapse Studio.

Image: Microsoft

Building analytical queries in Synapse Studio 

Synapse Studio is the heart of Azure Synapse Analytics, where data engineers can build and test models before deploying them in production. SQL pools manage connections to your data, using either serverless or dedicated connections. While developing models, it’s best to use the built-in serverless pool; once you’re ready to go live you can provision a dedicated pool of SQL resources that can be scaled up and down as needed. However, it’s important to remember that you’re paying for those resources even if they’re not in use. You can also set up serverless pools for Apache Spark, helping keep costs to a minimum for hybrid queries. There is some overhead when launching serverless instances, but for building reports as a batch process, that shouldn’t be an issue. 

Azure Synapse is fast: building a two-million-row table takes just seconds. You can quickly work with any tabular data using familiar SQL queries, using the Studio UI to display results as charts where necessary. That same data can be loaded from your SQL store into Spark without writing any ETL code for data conversion. All you need to do is create a new Spark notebook, then create the database and import it from your SQL pool. Data from Spark can be passed back to the SQL pool, allowing you to use Spark to manipulate data sets for further analysis. You can use SQL queries on Spark datasets directly, simplifying what could otherwise be complex programming tasks unifying results from different platforms. 


One useful feature of Azure Data Lakes using Gen 2 storage is the ability to link to other storage accounts, allowing you to quickly work with other data sources without having to import them into your data lake store. Using Azure Synapse Studio, your queries are stored in notebooks. These notebooks can be added to pipelines to automate analysis. You can set triggers to run an analysis at set intervals, driving Power BI-based dashboards and reports. 

There’s a lot to explore with Synapse Studio, and to get the most from it requires plenty of data-engineering experience. It’s not a tool for beginners or for end users: you need to be experienced in both SQL-based data-warehousing techniques and in tools like Apache Spark. However, it’s the combination of those tools and the ability to publish results in desktop analytical tools like Power BI that makes it most useful. 

The cost of at-scale data lake analysis will always put it beyond some organisations' reach. But using a single environment to create and share analyses should go a long way towards unlocking the utility of business data. 


Moonshots for the Treatment of Aging: Less Incrementalism, More Ambition


There is far too much incrementalism in the present research and development of therapies to treat aging. Much of the field is engaged in mimicking calorie restriction or repurposing existing drugs that were found to increase mouse life span by a few percentage points. This will not meaningfully change the shape of human life, but nonetheless costs just as much as efforts to achieve far more.

If billions of dollars and the efforts of thousands of researchers are to be devoted to initiatives to treat aging, then why not pursue the ambitious goal of rejuvenation and adding decades to healthy life spans? It is just as plausible.


There are just as many starting points and plausible research programs aimed at outright rejuvenation via repair of molecular damage, such as those listed in the SENS approach to aging, as there are aimed at achieving only small benefits in an aged metabolism. The heavy focus on incremental, low yield programs of research and development in the present community is frustrating, and that frustration is felt by many.

As the global population ages, there is increased interest in living longer and improving quality of life in later years. However, studying aging, the decline in body function, is expensive and time-consuming. And despite successes in making model organisms live longer, there still aren't any feasible interventions for delaying aging in humans. Early in the space race, scientists and engineers couldn't know exactly what it would take to get to the moon; they had to extrapolate from theory and shorter-range tests. Perhaps aging research needs a similar moonshot philosophy. Like the moon once was, provable therapies to increase human healthspan or lifespan seem a long way off. This review therefore focuses on radical proposals, in the hope of stimulating discussion about what we might do significantly differently from ongoing aging research.

A less than encouraging sign for many of the lifespan experiments done in preclinical models, namely mammals such as mice, is that they show modest effect sizes, often reaching statistical significance in only one sex, and often only under specific dietary or housing conditions. Even inhibiting one of the most potent and well-validated aging pathways, the mechanistic target of rapamycin (mTOR) pathway, has arguably modest effects on lifespan: a 12-24% increase in mice. This is all to ask: if the mTOR inhibitor rapamycin is one of the best-case scenarios, yet might be predicted to have a modest effect if any (and possibly a detrimental one) in people, should it continue to receive so much focus from the aging community? Note that the field's problems with small and inconsistent effects for its leading strategies aren't specific to rapamycin.

Treating individual aging-related diseases has encountered roadblocks that should also call into question whether we are on the optimal path for treating human aging. Alzheimer's disease is a particularly well-funded and well-researched aging-related topic, yet there are still huge gaps in our understanding and a lack of good treatment options. There has been considerable focus on amyloid beta and tau, but targeting those molecules hasn't done much for Alzheimer's so far, leaving many searching for answers. The point is that when we collectively spend a long time on something that isn't working well, such as manipulating a single gene or biological process, it should seem natural to consider conceptually different approaches.

Link: https://doi.org/10.3233/NHA-190064

Source: Fight Aging!



