Along with its 11th Gen Core CPU family based on the Tiger Lake architecture, Intel has launched the Intel Evo platform, a new public-facing name and branding opportunity for its Project Athena initiative, which aims to fine-tune the performance and experience delivered by premium ultraportable laptops. The new badge replaces the clunky previous “Engineered for Mobile Performance” sticker, and echoes the once-popular Intel Centrino branding initiative. Intel will cooperate with laptop OEMs and the entire manufacturing ecosystem to deliver Intel Evo-badged laptops, which will then be promoted under that label.
Project Athena, now in its second generation, is a set of guidelines and specifications defining a laptop’s user experience parameters, which in turn dictate the components and design elements used. According to Intel, this initiative improves all the tech integrated into a laptop, resulting in miniaturisation, better performance, and longer battery life. Examples of features that can be expected from laptops bearing the Evo badge are edge-to-edge displays, compact antennas, faster Wi-Fi, and lighter bodies.
Intel says it is working with more than 150 partners including laptop manufacturers, component vendors, OEMs and ODMs, and software partners. There have already been more than 50 verified designs for the first generation, across Windows and Chrome OS, in the consumer and commercial markets.
To meet the requirements of the Intel Evo platform, laptops will have to achieve specific targets based on common workloads running in real-world conditions. Intel says it uses several performance and responsiveness tests designed around actual usage scenarios that involve creating, collaborating, browsing, and streaming. Performance must be consistent whether running on battery power or mains, multitasking must be smooth, and the system must remain responsive throughout.
Intel Evo laptops must wake from standby in less than one second, deliver 9 hours or more of battery life even with a large, bright display, and deliver at least 4 hours of battery life from a 30-minute charge. They will also feature Thunderbolt 4, Wi-Fi 6, precision touchpads, pen and voice capabilities, high-quality webcams, and premium audio and video.
Along with this new platform, Intel has unveiled a fresh new brand identity and corporate logo, which will roll out across its products and communications. The new look features square patterns and bright colours, and ditches the elliptical swoosh around the Intel wordmark. This is only the third major change to Intel’s corporate identity in its history, as explained by Senior Vice President and Chief Marketing Officer Karen Walker during the company’s live-streamed Tiger Lake launch event.
Google Pixel 4a Review | NDTV Gadgets 360
After the disappointing launch price of the Pixel 3a in India last year, and the decision to not launch the Pixel 4, there has been little reason to get excited about the new models launched this year. Google is not launching the Pixel 5 or the Pixel 4a 5G in India, at least not yet, but it has launched the Pixel 4a. The most affordable member of this year’s Pixel series, the Pixel 4a is priced a lot more aggressively this time around in India, at Rs. 31,999.
This year, Google is keeping things simple. There’s just one version of the Pixel 4a, so no XL option. It’s also available in only one configuration, with 6GB of RAM and 128GB of storage, and in only one colour – Just Black. I have spent a lot of time with it following my initial impressions a few weeks ago, and now it’s time to see if Google has done enough this year to get people interested again.
Google Pixel 4a design
There’s something very likeable about the Google Pixel 4a’s design. It’s not flashy or in-your-face; in fact it’s the exact opposite and yet it looks attractive. Google has used a unibody polycarbonate shell with a soft-touch matte finish. It looks nice and doesn’t attract fingerprints. The Pixel 4a is relatively slim at 8.2mm and really light, at just 143g. The overall compact dimensions of the body and the rounded edges make it a very comfortable phone to handle.
The volume and power buttons are placed on the right, and offer good tactile feedback. There’s a headphone jack on the top, a tray for a single Nano-SIM on the left, and the speaker and USB Type-C port on the bottom. The Google Pixel 4a only accepts a single physical SIM, but it does support an additional eSIM.
The back has a capacitive fingerprint sensor, so there’s no in-display sensor despite this phone having an OLED panel. This isn’t a big deal, as the fingerprint sensor works very well and can be used to pull down the notification shade with a swipe gesture. However, there’s no option for face recognition on the Pixel 4a.
Google has thankfully ditched the massive bezel of the previous generation for much narrower ones on the Pixel 4a. The borders are still a bit thick but they’re more or less even all around the display. You get a hole-punch cutout for the selfie camera. The display is a bit larger than that of the Pixel 3a, measuring 5.8 inches diagonally. It’s an OLED panel with a full-HD+ resolution. It supports HDR10 playback and is made using Gorilla Glass 3 for scratch protection.
One feature that’s missing compared to last year’s model is the Active Edge sensors. On previous Pixel phones, you used to be able to activate Google Assistant by squeezing the pressure-sensitive side panels. On the other hand, Google has kept the Now Playing feature, which automatically recognises songs being played in the background and displays the title and artist on your lockscreen or always-on display.
In the retail box of the Google Pixel 4a, you’ll find an 18W Type-C charger, a USB Type-C to Type-C cable, a Quick Switch adapter for importing data from an older phone, a SIM tool, and documentation. You don’t get any case or headset.
Google Pixel 4a performance and battery life
The Google Pixel 4a uses the Qualcomm Snapdragon 730G SoC, which is not the most powerful SoC you’ll find in phones at this price, but is good enough. There’s 6GB of LPDDR4X RAM and 128GB of storage, which again, are fairly adequate. The Pixel 4a supports 4G VoLTE, dual-band Wi-Fi ac, Bluetooth 5, NFC, and four satellite navigation systems. There’s no wireless charging or IP rating, but you do get stereo speakers. The Pixel 4a also features Google’s Titan M security hardware for biometric authentication and other security-related functions.
For software, units in the market at the time of the India launch are running Android 10 out of the box, but a final Android 11 update is available. My review unit was already running Android 11 when I began using it. If you’ve used a Pixel smartphone before, you know what to expect. The interface is completely clean, with no bloatware and just the essential Google apps preinstalled. There’s a Personal Safety app from Google which lets you set up emergency contacts, etc. There’s a Pixel Tips app to help first-time Pixel users get acquainted with their smartphone.
Google has incorporated some basic gestures, which can be found in the Settings app. You can enable gestures to quickly access the camera, silence an incoming call, etc. Being a Pixel phone, Google offers a minimum of three years of OS and security updates.
The relatively powerful hardware combined with Google’s lean software makes the usage experience wonderful. Unlocking the phone with the fingerprint sensor is quick, the interface is snappy, and the always-on display is great for peeking at the time or unread alerts. Google Assistant is speedy too, be it transcribing what you just said or fetching search results. The Pixel 4a unfortunately misses out on a higher refresh rate display, even 90Hz, which would have made the experience even better.
I found the display to be pretty good for watching content on. Colours are vivid, blacks are deep, and text is generally sharp. The screen gets very bright too but whites look a bit murky even at full brightness. This is especially noticeable when compared side by side with something like the OnePlus Nord, which is in the same price segment. HDR content looks good, whether played locally or through streaming apps. The stereo speakers sound good and get decently loud, although the bottom-firing one is a bit louder than the earpiece.
Gaming was also enjoyable. Everything from simple titles such as Mars: Mars, to heavier ones such as Call of Duty: Mobile ran smoothly. I didn’t feel any heating issues either, other than the side of the frame getting a bit warm.
The Google Pixel 4a has a 3,140mAh battery, which is a modest capacity by 2020 standards. Unsurprisingly, it didn’t fare too well in our HD video battery loop test, running for a little more than twelve and a half hours. However, I am happy to report that with medium to light real-world usage, I was able to make the Pixel 4a last for one full day on a single charge. On days with lots of camera usage and video watching, it did drain a bit faster, so if you’re expecting a phone that can last more than a day, you might be a little disappointed.
The Pixel 4a can fast-charge its battery with the bundled 18W adapter to about 52 percent in half an hour, and up to 88 percent in an hour. It took about 15-20 minutes more to reach full capacity. Since it uses the USB Power Delivery (PD) standard, you can use any Type-C PD charger to quickly charge the Pixel 4a.
Google Pixel 4a cameras
The Google Pixel 3a had an impressive set of cameras, not just for its segment, but in general. The Pixel 4a sticks to a single front and a single rear camera, with the same resolutions as last year’s model. The rear camera has a 12.2-megapixel sensor and an f/1.7 aperture, dual-pixel PDAF, and optical stabilisation. The front camera uses an 8-megapixel sensor and has an f/2.0 aperture. Sadly, there isn’t a physical ultra-wide-angle rear camera like you get on the 5G variant of the Pixel 4a, and on most other phones at this price level now.
However, Google hasn’t skimped on camera features in software, which are mostly the same as what you’d get with the flagship Pixel 5. There’s Night Sight, Top Shot, Super Res Zoom, Motion Autofocus, and Live HDR+. Frequent Faces, when enabled, is said to recognise faces you capture often and recommend shots focused on them when you’re selecting a Top Shot or Motion Photo. When shooting stills, the Google Pixel 4a lets you tweak the exposure and shadows independently before taking a shot, and even shows you the effects of each adjustment in real time in the viewfinder. For videos, you can manually adjust the exposure too, and tapping the viewfinder once will begin focus tracking.
The camera app has nearly all the shooting modes one would expect. There’s no manual mode, but you can enable RAW capture through the Settings menu.
Landscape photos shot during the day looked stunning. The Google Pixel 4a managed to capture natural-looking colours and well-balanced exposures. Details were fairly good, but when magnified, I noticed a bit of noise, and finer textures and edges didn’t have very good definition. Close-up shots had very good details, rich colours, and a pleasing background blur. In Portrait mode, I could digitally zoom in up to 4x. Portrait shots generally looked striking with good edge detection, details, and colours.
The Pixel 4a did an equally good job with low-light photos. Even without Night Sight, images looked clean with minimal noise, colours were vivid, and details were well defined. Night Sight helps correct the exposure a bit, and in very dark scenes, it can make an impactful difference.
Pixel smartphones have thus far been very good for selfies, and that continues. Selfies shot in daylight pack in very good detail, and thanks to the wide field of view, you can get quite a bit of the background in the frame. Portrait mode works well for selfies too. In low light, Night Sight makes a big difference to the type of photos you can capture. When used in combination with the screen flash (which is more of a fill-light than a flash), the results are even better.
The Google Pixel 4a can shoot up to 4K video at 30fps. During the day, I found the quality and stabilisation to be very good. Videos captured with the selfie camera are also electronically stabilised. Even in low light, video quality is pretty decent, with good exposure and a tolerable amount of shimmer when you walk.
I really wish Google had included an ultra-wide-angle camera, as that would have made the setup pretty much perfect. Even so, both cameras on the Pixel 4a deliver consistent and reliable results.
Verdict: Should you buy the Pixel 4a?
The Google Pixel 4a is being sold on Flipkart at a promotional price of Rs. 29,999, which is a bit lower than its official retail price of Rs. 31,999. I think it’s a good buy at this price for anyone looking to capture good photos and video with their smartphone. Unlike last year’s Pixel 3a, the Pixel 4a isn’t crippled too much in terms of processing power. It features a good SoC as well as enough RAM and storage to offer decent gaming performance. Battery life might not be as good as what the competition achieves, but despite its small capacity, you should expect this phone to last nearly a full day on average.
The OnePlus Nord is a very tempting competitor to the Google Pixel 4a, and it manages to one-up this phone in almost all areas, on paper anyway. So which one should you buy? That’s a discussion for another article, coming up very soon.
Artificial intelligence: Cheat sheet – TechRepublic
Learn artificial intelligence basics, business use cases, and more in this beginner’s guide to using AI in the enterprise.
Artificial intelligence (AI) is the next big thing in business computing. Its uses come in many forms, from simple tools that respond to customer chat, to complex machine learning systems that predict the trajectory of an entire organization. Popularity does not necessarily lead to familiarity, and despite its constant appearance as a state-of-the-art feature, AI is often misunderstood.
In order to help business leaders understand what AI is capable of, how it can be used, and where to begin an AI journey, it’s essential to first dispel the myths surrounding this huge leap in computing technology. Learn more in this AI cheat sheet. This article is also available as a download, Cheat sheet: Artificial intelligence (free PDF).
What is artificial intelligence?
When AI comes to mind, it’s easy to get pulled into a world of science-fiction robots like Data from Star Trek: The Next Generation, Skynet from the Terminator series, and Marvin the Paranoid Android from The Hitchhiker’s Guide to the Galaxy.
The reality of AI is nothing like fiction, though. Instead of fully autonomous thinking machines that mimic human intelligence, we live in an age where computers can be taught to perform limited tasks that involve making judgments similar to those made by people, but are far from being able to reason like human beings.
Modern AI can perform image recognition, understand natural human language and writing patterns, make connections between different types of data, identify abnormalities in patterns, strategize, predict, and more.
All artificial intelligence comes down to one core concept: Pattern recognition. At the core of all applications and varieties of AI is the simple ability to identify patterns and make inferences based on those patterns.
SEE: Artificial intelligence: A business leader’s guide (free PDF) (TechRepublic)
AI isn’t truly intelligent in the way we define intelligence: It can’t think and lacks reasoning skills, it doesn’t show preferences or have opinions, and it’s not able to do anything outside of the very narrow scope of its training.
That doesn’t mean AI isn’t useful for businesses and consumers trying to solve real-world problems; it just means that we’re nowhere close to machines that can actually make independent decisions or arrive at conclusions without first being given the proper data. Artificial intelligence is a marvel of technology, but it’s still far from replicating human intelligence or truly intelligent behavior.
What can artificial intelligence do?
AI’s power lies in its ability to become incredibly skilled at doing the things humans train it to do. Microsoft and Alibaba independently built AI machines capable of better reading comprehension than humans, Microsoft has AI that is better at speech recognition than its human builders, and some researchers predict that AI will outperform humans at almost everything within 50 years.
That doesn’t mean those AI creations are truly intelligent–only that they’re capable of performing human-like tasks with greater efficiency than us error-prone organic beings. If you were to try, say, to give a speech recognition AI an image-recognition task, it would fail completely. All AI systems are built for very specific tasks, and they don’t have the capability to do anything else.
Since the COVID-19 pandemic began in early 2020, artificial intelligence and machine learning have seen a surge of activity as businesses rush to fill holes left by employees forced to work remotely, or by those who’ve lost jobs due to the financial strain of the pandemic.
The quick adoption of AI during the pandemic highlights another important thing that AI can do: Replace human workers. According to Gartner, 79% of businesses are currently exploring or piloting AI projects, meaning most of those projects are still in the early stages of development. What the pandemic has done for AI is cause a shift in priorities and applications: Instead of focusing on financial analysis and consumer insight, post-pandemic AI projects are focusing on customer experience and cost optimization, Algorithmia found.
Like other AI applications, customer experience and cost optimization are based on pattern recognition. In the case of the former, AI bots can perform many basic customer service tasks, freeing employees up to only address cases that need human intervention. AI like this has been particularly widespread during the pandemic, when workers forced out of call centers put stress on the customer service end of business.
What are the business applications of artificial intelligence?
Modern AI systems are capable of amazing things, and it’s not hard to imagine what kind of business tasks and problem-solving exercises they could be suited to. Think of any routine task, even incredibly complicated ones, and there’s a possibility an AI can do it more accurately and quickly than a human–just don’t expect it to do science-fiction-level reasoning.
In the business world, there are plenty of AI applications, but perhaps none is gaining traction as much as business analytics and its end goal: Prescriptive analytics.
Business analytics is a complicated set of processes that aim to model the present state of a business, predict where it will go if kept on its current trajectory, and model potential futures with a given set of changes. Prior to the AI age, analytics work was slow, cumbersome, and limited in scope.
SEE: Special report: Managing AI and ML in the enterprise (ZDNet) | Download the free PDF version (TechRepublic)
When modeling the past of a business, it’s necessary to account for nearly endless variables, sort through tons of data, and include all of it in an analysis that builds a complete picture of the up-to-the-present state of an organization. Think about the business you’re in and all the things that need to be considered, and then imagine a human trying to calculate all of it–cumbersome, to say the least.
Predicting the future with an established model of the past can be easy enough, but prescriptive analysis, which aims to find the best possible outcome by tweaking an organization’s current course, can be downright impossible without AI help.
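The distinction between predictive and prescriptive analytics can be made concrete with a toy sketch: given any predictive model of an outcome, the prescriptive step is a search over candidate actions for the one with the best predicted result. Everything below (the model, the variables, the numbers) is hypothetical illustration, not something taken from any real analytics platform:

```python
from itertools import product

def predicted_profit(price, ad_spend):
    """Toy predictive model, standing in for a trained ML model:
    demand falls as price rises and grows with advertising."""
    demand = max(0.0, 1000 - 8 * price + 0.5 * ad_spend)
    return demand * (price - 20) - ad_spend  # assumes a unit cost of 20

# Prescriptive step: exhaustively search candidate actions
# for the one with the best predicted outcome.
best = max(
    product(range(25, 101, 5), range(0, 2001, 100)),  # (price, ad_spend) grid
    key=lambda action: predicted_profit(*action),
)
```

Real prescriptive systems replace both pieces: the hand-written formula becomes a model learned from business data, and the brute-force grid becomes a proper optimizer, but the shape of the problem is the same.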
SEE: Artificial intelligence ethics policy (TechRepublic Premium)
There are many artificial intelligence software platforms and AI machines designed to do all that heavy lifting, and the results are transforming businesses: What was once out of reach for smaller organizations is now feasible, and businesses of all sizes can make the most of each resource by using artificial intelligence to design the perfect future.
Analytics may be the rising star of business AI, but it’s hardly the only application of artificial intelligence in the commercial and industrial worlds. Other AI use cases for businesses include the following.
- Recruiting and employment: Human beings can often overlook qualified candidates, or candidates can fail to make themselves noticed. Artificial intelligence can streamline recruiting by filtering through larger numbers of candidates more quickly, and by noticing qualified people who may go overlooked.
- Fraud detection: Artificial intelligence is great at picking up on subtle differences and irregular behavior. If trained to monitor financial and banking traffic, AI systems can pick up on subtle indicators of fraud that humans may miss.
- Cybersecurity: Just as with financial irregularities, artificial intelligence is great at detecting indicators of hacking and other cybersecurity issues.
- Data management: AI can categorize raw data and find relationships between items that were previously unknown.
- Customer relations: Modern AI-powered chatbots are incredibly good at carrying on conversations thanks to natural language processing. AI chatbots can be a great first line of customer interaction.
- Healthcare: Not only are some AIs able to detect cancer and other health concerns before doctors, they can also provide feedback on patient care based on long-term records and trends.
- Predicting market trends: Much like prescriptive analysis in the business analytics world, AI systems can be trained to predict trends in larger markets, which can lead to businesses getting a jump on emerging trends.
- Reducing energy use: Artificial intelligence can streamline energy use in buildings, and even across cities, as well as make better predictions for construction planning, oil and gas drilling, and other energy-centric projects.
- Marketing: AI systems can be trained to increase the value of marketing both toward individuals and larger markets, helping organizations save money and get better marketing results.
If a problem involves data, there’s a good possibility that AI can help. This list is hardly complete, and new innovations in AI and machine learning are being made all the time.
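The fraud-detection and cybersecurity use cases above both boil down to flagging statistical outliers in a stream of events. A deliberately crude sketch of that idea (real systems use trained models rather than a fixed z-score threshold, and the transaction amounts here are invented):

```python
import statistics

def flag_anomalies(amounts, threshold=2.5):
    """Flag transactions whose amount deviates more than `threshold`
    standard deviations from the mean. A crude stand-in for the
    pattern-recognition models real fraud systems use."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # all amounts identical: nothing stands out
    return [i for i, x in enumerate(amounts)
            if abs(x - mean) / stdev > threshold]

# One transaction (index 7) is wildly out of line with the rest.
txns = [42.0, 39.5, 41.2, 40.8, 38.9, 43.1, 40.0, 975.0, 41.7, 39.2]
suspicious = flag_anomalies(txns)
```

The same shape, establishing a baseline of normal behavior and flagging deviations from it, underlies the intrusion-detection and data-management examples as well.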
What AI platforms are available?
When adopting an AI strategy, it’s important to know what sorts of software are available for business-focused AI. There are a wide variety of platforms available from the usual cloud-hosting suspects like Google, AWS, Microsoft, and IBM, and choosing the right one can mean the difference between success and failure.
AWS Machine Learning offers a wide variety of tools that run in the AWS cloud. AI services, pre-built frameworks, analytics tools, and more are all available, with many designed to take the legwork out of getting started. AWS offers pre-built algorithms, one-click machine learning training, and training tools for developers getting started in, or expanding their knowledge of AI development.
Google Cloud offers similar AI solutions to AWS, as well as having several pre-built total AI solutions that organizations can (ideally) plug into their organizations with minimal effort. Google’s AI offerings include the TensorFlow open source machine learning library.
Microsoft’s AI platform comes with pre-generated services, ready-to-deploy cloud infrastructure, and a variety of additional AI tools that can be plugged in to existing models. Its AI Lab also offers a wide range of AI apps that developers can tinker with and learn from what others have done. Microsoft also offers an AI school with educational tracks specifically for business applications.
Watson is IBM’s version of cloud-hosted machine learning and business AI, but it goes a bit further with more AI options. IBM offers on-site servers custom built for AI tasks for businesses that don’t want to rely on cloud hosting, and it also has IBM AI OpenScale, an AI platform that can be integrated into other cloud hosting services, which could help to avoid vendor lock-in.
Before choosing an AI platform, it’s important to determine what sorts of skills you have available within your organization, and what skills you’ll want to focus on when hiring new AI team members. The platforms can require specialization in different sorts of development and data science skills, so be sure to plan accordingly.
What AI skills will businesses need to invest in?
With business AI taking so many forms, it can be tough to determine what skills an organization needs to implement it.
As previously reported by TechRepublic, finding employees with the right set of AI skills is the problem most commonly cited by organizations looking to get started with artificial intelligence.
Skills needed for an AI project differ based on business needs and the platform being used, though most of the biggest platforms (like those listed above) support most, if not all, of the most commonly used programming languages and skills needed for AI.
SEE: Don’t miss our latest coverage about AI (TechRepublic on Flipboard)
TechRepublic covered the 10 most in-demand AI skills in March 2018, an excellent summary of the types of training an organization should look at when building or expanding a business AI team.
Many business AI platforms offer training courses in the specifics of running their architecture and the programming languages needed to develop more AI tools. Businesses that are serious about AI should plan to either hire new employees or give existing ones the time and resources necessary to train in the skills needed to make AI projects succeed.
How can businesses start using artificial intelligence?
Getting started with business AI isn’t as easy as simply spending money on an AI platform provider and spinning up some pre-built models and algorithms. There’s a lot that goes into successfully adding AI to an organization.
At the heart of it all is good project planning. Adding artificial intelligence to a business, no matter how it will be used, is just like any business transformation initiative. Here is an outline of just one way to approach getting started with business AI.
Determine your AI objective. Figure out how AI can be used in your organization and to what end. By focusing on a narrower implementation with a specific goal, you can better allocate resources.
Identify what needs to happen to get there. Once you know where you want to be, you can figure out where you are and how to make the journey. This could include starting to sort existing data, gathering new data, hiring talent, and other pre-project steps.
Build a team. With an end goal in sight and a plan to get there, it’s time to assemble the best team to make it happen. This can include current employees, but don’t be afraid to go outside the organization to find the most qualified people. Also, be sure to allow existing staff to train so they have the opportunity to contribute to the project.
Choose an AI platform. Some AI platforms may be better suited to particular projects, but by and large they all offer similar products in order to compete with each other. Let your team give recommendations on which AI platform to choose–they’re the experts who will be in the trenches.
Begin implementation. With a goal, team, and platform, you’re ready to start working in earnest. This won’t be quick: AI machines need to be trained, testing on subsets of data has to be performed, and lots of tweaks will need to be made before a business AI is ready to hit the real world.
Plant scientists develop model for identifying lentil varieties best suited to climate change impacts
With demand for lentils growing globally and climate change driving temperatures higher, a University of Saskatchewan (USask)-led international research team has developed a model for predicting which varieties of the pulse crop are most likely to thrive in new production environments.
An inexpensive plant-based source of protein that can be cooked quickly, lentil is a globally important crop for combating food and nutritional insecurity.
But increased production to meet this global demand will have to come from either boosting yields in traditional growing areas or shifting production to new locations, said USask plant scientist Kirstin Bett.
“By understanding how different lentil lines will interact with the new environment, we can perhaps get a leg up in developing varieties likely to do well in new growing locations,” said Bett.
Working with universities and organizations around the globe, the team planted 324 lentil varieties in nine lentil production hotspots, including two in Saskatchewan and one in the United States, as well as sites in South Asia (Nepal, Bangladesh, and India) and the Mediterranean (Morocco, Spain, and Italy).
The findings, published in the journal Plants, People, Planet, will help producers and breeders identify existing varieties or develop new lines likely to flourish in new growing environments—valuable intelligence in the quest to feed the world’s growing appetite for inexpensive plant-based protein.
The new mathematical model is based on a key predictor of crop yield: days to flowering (DTF). DTF is determined by two factors: day length (hours of sunshine, or “photoperiod”) and the mean temperature of the growing environment. Using detailed information about each variety’s interaction with temperature and photoperiod, the simple model can be used to predict the number of days it takes each variety to flower in a specific environment.
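The article doesn’t give the model’s equation, but photothermal flowering models of this kind commonly express the rate of progress toward flowering (1/DTF) as a linear function of mean temperature and photoperiod. A hypothetical sketch of that form, with made-up coefficients (the real variety-specific coefficients would be fitted from the multi-site trial data):

```python
def days_to_flowering(temp_c, photoperiod_h, a, b, c):
    """Predict days to flowering (DTF) with a simple photothermal model.

    The rate of progress toward flowering (1/DTF) is modelled as
    linear in mean temperature and photoperiod; a, b and c are
    variety-specific coefficients fitted from field trials.
    """
    rate = a + b * temp_c + c * photoperiod_h
    if rate <= 0:
        return float("inf")  # conditions never trigger flowering
    return 1.0 / rate

# Illustrative, invented coefficients for one hypothetical variety:
a, b, c = -0.03, 0.0012, 0.0022
dtf = days_to_flowering(temp_c=18.0, photoperiod_h=14.0, a=a, b=b, c=c)
```

With per-variety coefficients in hand, the same function can be evaluated against the temperature and day length of any candidate growing location, which is how such a model supports the “will this line do well here?” predictions described above.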
“With this model, we can predict which lines they (producers) should be looking at that will do well in new regions, how they should work, and whether they’ll work,” Bett said.
For example, lentil producers in Nepal—which is already experiencing higher mean temperatures as a result of climate change—can use the model to identify which lines will produce high yields if they’re grown at higher altitudes.
Closer to home in Western Canada, the model could be used to predict which varieties should do well in what are currently considered to be marginal production areas.
The project also involved USask plant researchers Sandesh Neupane, Derek Wright, Crystal Chan and Bert Vandenberg.
The next step is putting the new model to work in lentil breeding programs to identify the genes that are controlling lentil lines’ interactions with temperature and day length, said Bett.
Once breeders determine the genes involved, they can develop molecular markers that will enable breeders to pre-screen seeds. That way they’ll know how crosses between different lentil varieties are likely to perform in different production locations.
This research project was part of the Application of Genomics to Innovation in the Lentil Economy (AGILE) program funded by Genome Canada and managed by Genome Prairie. Matching financial support was provided by partners that include the Saskatchewan Pulse Growers, Western Grains Research Foundation, and USask.