This critical Android bug allows malware to masquerade as legitimate apps

Researchers have discovered a serious flaw in the ubiquitous Android operating system that allows malware to masquerade as legitimate applications and deceive users into divulging sensitive data.

Identified by security firm Promon, the malware (dubbed Strandhogg 2.0) infects devices via an illegitimate download and, once onboard, can perform malicious activities via multiple genuine applications.

The malware can also reportedly meddle with application permissions, allowing it to scrape sensitive user data and even track the affected individual’s real-time location.

The vulnerability is present on almost all versions of the Android OS - with the exception of Android 10 (released in September) - accounting for billions of devices.

Android malware

Strandhogg 2.0 functions by manipulating Android’s multi-tasking mechanism, which enables the user to switch seamlessly between applications without having to relaunch them each time.

When a user opens a genuine application, the malware performs a swift hijack and replaces the login page with a rigged overlay, allowing the operators to siphon off any account credentials the user enters.

While the malware does not automatically gain access to all device permissions upon installation, it can trigger its own requests for access to sensitive data such as messages, photos and location, which the user could then unwittingly approve.

The ability to access both account credentials and SMS messages is a particularly potent combination, because it affords hackers the ability to bypass certain Two-Factor Authentication (2FA) protections used to secure online accounts.

Although Strandhogg 2.0 has the potential to cause serious damage - especially since it is near-impossible to detect - researchers believe the flaw has not been exploited in the wild, a sentiment echoed by Android owner Google.

Promon refrained from publishing any information about the new malware until Google had ample opportunity to develop and issue a fix, to minimise the chances it could be used to mount an attack in the interim.

According to a Google spokesperson, Google Play Protect - the firm’s built-in malware protection service for Android - is now equipped to neutralize Strandhogg 2.0.

While the threat to individual users is reportedly minimal, Android owners are nonetheless advised to update their devices immediately.

Via TechCrunch

Apple resolves infuriating iOS bug that prevented users opening apps

Apple has remedied an issue with its Family Sharing system for iOS that prevented many iPhone and iPad users accessing their applications over the weekend.

The iOS bug caused an error alert to appear whenever an affected user attempted to open certain applications, reading: “This app is no longer shared with you”.

Alerts of this kind are typical if changes are made to a user’s access privileges, but in this case apps were rendered inaccessible whether settings had been tinkered with or not.

Some users found the problem could be fixed easily enough by deleting the affected applications via the Settings pane and then reinstalling, but to do this many times over would undoubtedly have been a frustrating affair.

Apple has confirmed a full fix has now been issued, but did not reveal specifics about the cause of the original problem.

iOS app updates

In connection with the bug, many iPhone and iPad users also reported a sudden spike in the number of apps queued for an update over the weekend.

Apple appeared to reissue a number of recent updates for popular applications, forcing some users to reinstall the latest iteration of software already present (and fully-updated) on their devices.

The volume of updates required varied from user to user, depending on how many apps were installed on the device, but reportedly reached up to 100 in some instances.

Apple supplied no official explanation (and did not respond immediately to our request for clarification at the time) but it now appears the mysterious onslaught of updates was connected with the ongoing development of a fix for the Family Sharing bug.

Now the issue has been resolved, iPhone and iPad owners should be able to access their applications as normal.

Via The Verge and TechCrunch

Bundesliga teams up with AWS to give football fans real-time match insights

The German Bundesliga has integrated Amazon Web Services technology into live broadcasts to give football fans real-time insights into the games playing out in front of them.

Amazon’s cloud infrastructure and artificial intelligence (AI) will be deployed to gather player data and generate two forms of insight: Average Positions and Expected Goals (xGoals).

The former tracks the positions each player adopts on the pitch, which AWS claims will provide viewers with insight into their side’s intended playing style. xGoals, meanwhile, is a measure of the probability a player will score a goal with a shot from any given position on the field.

The feature was rolled out for last night’s Borussia Dortmund vs. Bayern Munich tie, known colloquially as Der Klassiker due to the ferocious and long-standing rivalry between the two teams.

xGoals and Average Positions

The Bundesliga - the country’s top division - is the first major football league in Europe to return to action following the coronavirus pandemic, albeit behind closed doors. As such, the eyes of football-starved fans from across the globe are trained firmly on the German top flight.

While the league does not traditionally draw the international attention or acclaim garnered by the English Premier League, for instance, the Bundesliga will hope the new real-time stats will help keep ratings high once rival competitions have resumed.

“AWS is helping the Bundesliga enhance the broadcast viewing experience by delivering deeper insights into the game that didn’t previously exist,” explained Andy Isherwood, Vice President and Managing Director EMEA at AWS.

“With AWS, Bundesliga is able to provide real-time statistics to predict future plays and outcomes. These two new statistics are just the beginning of what we’ll be able to deliver for football fans as we look forward to unlocking new ways to better educate, engage and entertain viewers around the world.”

To assess Average Positions, AWS captures and analyzes information on each player’s average location on the field, then pushes the resulting analysis to viewers at home.

To calculate xGoals, meanwhile, the Bundesliga will lean on Amazon SageMaker - a service designed to build, train and deploy machine learning models. To ensure the greatest possible degree of accuracy, models were trained on data relating to 40,000 shots on goal from previous games, along with an array of positional data.
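
Neither the Bundesliga nor AWS has published the internals of the xGoals model, but the core idea behind an expected-goals metric can be sketched in a few lines of Python. Everything below - the two features, the synthetic shot data and the figures - is invented purely for illustration and is not the production SageMaker pipeline.

```python
# Illustrative sketch of an expected-goals (xGoals) style model trained on
# synthetic shot data. The real Bundesliga/AWS system is built on Amazon
# SageMaker with 40,000+ historical shots and richer positional features;
# the features and data here are assumptions for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_shots = 5000

# Synthetic features: distance to goal (metres) and angle to goal (radians).
distance = rng.uniform(2, 35, n_shots)
angle = rng.uniform(0.1, 1.4, n_shots)

# Synthetic outcomes: closer, more central shots are more likely to score.
score_prob = 1 / (1 + np.exp(0.25 * distance - 2.0 * angle))
goal = rng.random(n_shots) < score_prob

X = np.column_stack([distance, angle])
model = LogisticRegression().fit(X, goal)

# Estimated xGoals for a hypothetical shot from 11 metres with a decent angle.
shot = np.array([[11.0, 1.0]])
print(f"Estimated xGoals: {model.predict_proba(shot)[0, 1]:.2f}")
```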

“We at the Bundesliga are able to use this advanced technology from AWS, including statistics, analytics and machine learning, to interpret the data and deliver more in-depth insight and better understanding of the split-second decisions made on the pitch,” added Andreas Heyden, Executive Vice President of Digital Innovations for the DFL Group.

While last night’s Der Klassiker was selected as the litmus test, Average Positions insights will be available for all future Bundesliga broadcasts, while xGoals will be available for highlight matches only.

This new iOS jailbreak tool can unlock even the latest iPhones

Hackers have published a brand new iOS jailbreak tool, capable of unlocking the vast majority of iPhones - including the newest devices.

Published by infamous hacking syndicate Unc0ver, the tool allows iPhone users to bypass Apple’s strict security controls on the kinds of software that can be installed on its devices, as well as customize their phones to a greater extent than is usually allowed.

The new jailbreak works on iPhones running iOS 11 or later, including devices on iOS 13.5, which was released only days ago.

According to figures published by Apple, 94% of iPhones currently run on iOS 12 or iOS 13, which means the new jailbreaking kit is compatible with nearly all Apple phones in circulation.

New iPhone jailbreak

Apple has traditionally enjoyed a stellar reputation where cybersecurity is concerned, although this status has come under threat in recent months.

In April, security researchers uncovered a serious flaw in Apple’s native Mail application that could allow hackers to scrape personal information without the victim’s knowledge. Exploit acquisition platform Zerodium also recently announced it would no longer purchase certain iPhone flaws, because an oversupply had driven down their value.

The new jailbreak kit reportedly exploits a zero-day vulnerability in the operating system, discovered by the Unc0ver team. Although the precise nature of the flaw is as yet unknown, the jailbreak vector is expected to be blocked off by Apple sooner or later.

While jailbreaking affords the user access to new functionalities and additional opportunities for customization, the practice also comes with distinct risks. Operating outside of Apple’s security bubble inevitably increases the number of avenues of attack, and users also incur risk when downloading apps from third-party sources that have not been vetted by Apple.

For this reason, users are advised to refrain from derestricting their iPhones using a jailbreak tool unless they understand the full scope of potential risks.

Apple did not respond immediately to our request for comment.

Via TechCrunch

Is AMD inside Microsoft’s new OpenAI supercomputer? Surprise tweet may imply so

Ed: We reached out to Microsoft to confirm who provided the processors but didn't get any answers. AMD has, meanwhile, issued a rather candid tweet that some say alludes to Epyc, AMD's server processor, powering the new OpenAI supercomputer - after all, it would make little sense to give a shoutout if a rival were actually powering such a halo product. The original story follows below.

Microsoft has unveiled a brand new supercomputer designed specifically to train gigantic artificial intelligence (AI) models, the firm announced at its annual Build conference.

Hosted in Microsoft Azure, the system boasts more than 285,000 CPU cores, 10,000 GPUs and 400Gbps of connectivity for each GPU server, placing it among the top five most powerful supercomputers in the world.  

The supercomputer was built in partnership with and will be exclusively used by OpenAI, a San Francisco-based firm dedicated to the ethical implementation of AI.

Microsoft supercomputer

According to a Microsoft blog, the design of its supercomputer was informed by new developments in the field of AI research, which suggest large-scale models could unlock a host of new opportunities.

Developers have traditionally designed specific small-scale AI models for each individual task, such as identifying objects or parsing language. However, the new consensus among researchers is that these objectives can be better achieved via a single colossal model, which digests extraordinary volumes of information.

“This type of model can so deeply absorb the nuances of language, grammar, knowledge, concepts and context that it can excel at multiple tasks: summarizing a lengthy speech, moderating content in live gaming chats...or even generating code from scouring GitHub,” reads the post.

The new supercomputer, according to Microsoft, marks a significant step on the road to training these ultra-large models and making them available as a platform for developers to build upon.

“The exciting thing about these models is the breadth of things they’re going to enable,” said Kevin Scott, Microsoft CTO.

“This is about being able to do a hundred exciting things in natural language processing at once and a hundred exciting things in computer vision, and when you start to see combinations of these perceptual domains, you’re going to have new applications that are hard to even imagine right now.”

Beyond its partnership with OpenAI, Microsoft has also developed its own large-scale AI models (referred to as Microsoft Turing Models) under its AI at Scale initiative, designed to improve language understanding across its product suite.

The eventual goal, according to the Redmond giant, is to make these models and its supercomputing resources available to businesses and data scientists via Azure AI services and GitHub.

Playing God: Why artificial intelligence is hopelessly biased – and always will be

Much has been said about the potential of artificial intelligence (AI) to transform many aspects of business and society for the better. In the opposite corner, science fiction has the doomsday narrative covered handily.

To ensure AI products function as their developers intend - and to avoid a HAL9000 or Skynet-style scenario - the common narrative suggests that data used as part of the machine learning (ML) process must be carefully curated, to minimise the chances the product inherits harmful attributes.

According to Richard Tomsett, AI Researcher at IBM Research Europe, “our AI systems are only as good as the data we put into them. As AI becomes increasingly ubiquitous in all aspects of our lives, ensuring we’re developing and training these systems with data that is fair, interpretable and unbiased is critical.”

Left unchecked, the influence of undetected bias could also expand rapidly as appetite for AI products accelerates, especially if the means of auditing underlying data sets remain inconsistent and unregulated.

However, while the issues that could arise from biased AI decision making - such as prejudicial recruitment or unjust incarceration - are clear, the problem itself is far from black and white. 

Questions surrounding AI bias are impossible to disentangle from complex and wide-ranging issues such as the right to data privacy, gender and race politics, historical tradition and human nature - all of which must be unraveled and brought into consideration.

Meanwhile, questions over who is responsible for establishing the definition of bias and who is tasked with policing that standard (and then policing the police) serve to further muddy the waters.

The scale and complexity of the problem more than justifies doubts over the viability of the quest to cleanse AI of partiality, however noble it may be.

What is algorithmic bias?

Algorithmic bias can be described as any instance in which discriminatory decisions are reached by an AI model that aspires to impartiality. Its causes lie primarily in prejudices (however minor) found within the vast data sets used to train machine learning (ML) models, which act as the fuel for decision making. 

Biases underpinning AI decision making could have real-life consequences for both businesses and individuals, ranging from the trivial to the hugely significant.

For example, a model responsible for predicting demand for a particular product, but fed data relating to only a single demographic, could plausibly generate decisions that lead to the loss of vast sums in potential revenue.

Equally, from a human perspective, a program tasked with assessing requests for parole or generating quotes for life insurance plans could cause significant damage if skewed by an inherited prejudice against a certain minority group.

According to Jack Vernon, Senior Research Analyst at IDC, the discovery of bias within an AI product can, in some circumstances, render it completely unfit for purpose.

“Issues arise when algorithms derive biases that are problematic or unintentional. There are two usual sources of unwanted biases: data and the algorithm itself,” he told TechRadar Pro via email.

“Data issues are self-explanatory enough, in that if features of a data set used to train an algorithm have problematic underlying trends, there's a strong chance the algorithm will pick up and reinforce these trends.”

“Algorithms can also develop their own unwanted biases by mistake...Famously, an algorithm for identifying polar bears and brown bears had to be discarded after it was discovered the algorithm based its classification on whether there was snow on the ground or not, and didn't focus on the bear's features at all.”

Vernon’s example illustrates the eccentric ways in which an algorithm can diverge from its intended purpose - and it’s this semi-autonomy that can pose a threat, if a problem goes undiagnosed.

The greatest issue with algorithmic bias is its tendency to compound already entrenched disadvantages. In other words, bias in an AI product is unlikely to result in a white-collar banker having their credit card application rejected erroneously, but may play a role in a member of another demographic (which has historically had a greater proportion of applications rejected) suffering the same indignity.

The question of fair representation

The consensus among the experts we consulted for this piece is that, in order to create the least prejudiced AI possible, a team made up of the most diverse group of individuals should take part in its creation, using data from the deepest and most varied range of sources.

The technology sector, however, has a long-standing and well-documented issue with diversity where both gender and race are concerned.

In the UK, only 22% of directors at technology firms are women - a proportion that has remained practically unchanged for the last two decades. Meanwhile, only 19% of the overall technology workforce are female, far from the 49% that would accurately represent the ratio of female to male workers in the UK.

Among big tech, meanwhile, the representation of minority groups has also seen little progress. Google and Microsoft are industry behemoths in the context of AI development, but the percentage of black and Latin American employees at both firms remains minuscule.

According to figures from 2019, only 3% of Google’s 100,000+ employees were Latin American and 2% were black - both figures up by only 1% over 2014. Microsoft’s record is only marginally better, with 5% of its workforce made up of Latin Americans and 3% black employees in 2018.

The adoption of AI in enterprise, on the other hand, skyrocketed during a similar period according to analyst firm Gartner, increasing by 270% between 2015 and 2019. The clamour for AI products, then, could be said to be far greater than the commitment to ensuring their quality.

Patrick Smith, CTO at data storage firm Pure Storage, believes businesses owe it not just to those who could be affected by bias, but also to themselves, to address the diversity issue.

“Organisations across the board are at risk of holding themselves back from innovation if they only recruit in their own image. Building a diversified recruitment strategy, and thus a diversified employee base, is essential for AI because it allows organisations to have a greater chance of identifying blind spots that you wouldn’t be able to see if you had a homogenous workforce,” he said.

“So diversity and the health of an organisation relates specifically to diversity within AI, as it allows them to address unconscious biases that otherwise could go unnoticed.”

Further, questions over precisely how diversity is measured add another layer of complexity. Should a diverse data set afford each race and gender equal representation, or should representation of minorities in a global data set reflect the proportions of each found in the world population?

In other words, should data sets feeding globally applicable models contain information relating to an equal number of Africans, Asians, Americans and Europeans, or should they represent greater numbers of Asians than any other group?

The same question can be raised with gender, given that roughly 105 boys are born worldwide for every 100 girls.

The challenge facing those whose goal it is to develop AI that is sufficiently impartial (or perhaps proportionally impartial) is the challenge facing societies across the globe. How can we ensure all parties are not only represented but heard, when historical precedent is working all the while to undermine the endeavor?

Is data inherently prejudiced?

The importance of feeding the right data into ML systems is clear, correlating directly with AI’s ability to generate useful insights. But identifying the right versus wrong data (or good versus bad) is far from simple.

As Tomsett explains, “data can be biased in a variety of ways: the data collection process could result in badly sampled, unrepresentative data; labels applied to the data through past decisions or human labellers may be biased; or inherent structural biases that we do not want to propagate may be present in the data.”

“Many AI systems will continue to be trained using bad data, making this an ongoing problem that can result in groups being put at a systemic disadvantage,” he added.

It would be logical to assume that removing data types that could possibly inform prejudices - such as age, ethnicity or sexual orientation - might go some way to solving the problem. However, auxiliary or adjacent information held within a data set can also serve to skew output.

An individual’s postcode, for example, might reveal much about their characteristics or identity. This auxiliary data could be used by the AI product as a proxy for the primary data, resulting in the same level of discrimination.
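
That proxy effect is easy to reproduce on synthetic data. The short Python sketch below - an invented scenario, not drawn from any real system - trains a model that never sees the protected attribute, yet still approves one group far more often than the other because postcode carries the same signal.

```python
# Minimal, synthetic illustration of proxy discrimination: the model is never
# shown the protected attribute, but a correlated feature (postcode) carries
# the same signal. All data and thresholds are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute (0 or 1), deliberately excluded from the training features.
group = rng.integers(0, 2, n)

# Postcode cluster correlates strongly with group membership (the proxy).
postcode = np.where(rng.random(n) < 0.85, group, 1 - group)

# Historical outcomes were biased in favour of group 0, independent of income.
income = rng.normal(50, 10, n)
approved = (income + 8 * (group == 0) + rng.normal(0, 5, n)) > 52

# Train only on income and postcode - no protected attribute in sight.
X = np.column_stack([income, postcode])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"Predicted approval rate, group {g}: {pred[group == g].mean():.2%}")
```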

Further complicating matters, there are instances in which bias in an AI product is actively desirable. For example, if using AI to recruit for a role that demands a certain level of physical strength - such as firefighter - it is sensible to discriminate in favor of male applicants, because biology dictates the average male is physically stronger than the average female. In this instance, the data set feeding the AI product is indisputably biased, but appropriately so.

This level of depth and complexity makes auditing for bias, identifying its source and grading data sets a monumentally challenging task.

To tackle the issue of bad data, researchers have toyed with the idea of bias bounties, similar in style to the bug bounties used by cybersecurity vendors to weed out imperfections in their services. However, this model operates on the assumption that an individual is equipped to recognize bias against demographics other than their own - a question worthy of a whole separate debate.

Another compromise could be found in the notion of Explainable AI (XAI), which dictates that developers of AI algorithms must be able to explain in granular detail the process that leads to any given decision generated by their AI model.

“Explainable AI is fast becoming one of the most important topics in the AI space, and part of its focus is on auditing data before it’s used to train models,” explained Vernon.

“The capability of AI explainability tools can help us understand how algorithms have come to a particular decision, which should give us an indication of whether biases the algorithm is following are problematic or not.”

Transparency, it seems, could be the first step on the road to addressing the issue of unwanted bias. If we’re unable to prevent AI from discriminating, the hope is we can at least recognise discrimination has taken place.

Are we too late?

The perpetuation of existing algorithmic bias is another problem that bears thinking about. How many tools currently in circulation are fueled by significant but undetected bias? And how many of these programs might be used as the foundation for future projects?

When developing a piece of software, it’s common practice for developers to draw from a library of existing code, which saves time and allows them to embed pre-prepared functionalities into their applications.

The problem, in the context of AI bias, is that the practice could serve to extend the influence of bias, hiding away in the nooks and crannies of vast code libraries and data sets.

Hypothetically, if a particularly popular piece of open source code were to exhibit bias against a particular demographic, it’s possible the same discriminatory inclination could embed itself at the heart of many other products, unbeknownst to their developers.

According to Kacper Bazyliński, AI Team Leader at software development firm Neoteric, it is relatively common for code to be reused across multiple development projects, depending on their nature and scope.

“If two AI projects are similar, they often share some common steps, at least in data pre- and post-processing. Then it’s pretty common to transplant code from one project to another to speed up the development process,” he said.

“Sharing highly biased open source data sets for ML training makes it possible that the bias finds its way into future products. It’s a task for the AI development teams to prevent from happening.”

Further, Bazyliński notes that it’s not uncommon for developers to have limited visibility into the kinds of data going into their products.

“In some projects, developers have full visibility over the data set, but it’s quite often that some data has to be anonymized or some features stored in data are not described because of confidentiality,” he noted.

This isn’t to say code libraries are inherently bad - they are no doubt a boon for the world’s developers - but their potential to contribute to the perpetuation of bias is clear.

“Against this backdrop, it would be a serious mistake to...conclude that technology itself is neutral,” reads a blog post from Google-owned AI firm DeepMind.

“Even when bias does not originate with software developers, it is still repackaged and amplified by the creation of new products, leading to new opportunities for harm.”

Bias might be here to stay

‘Bias’ is an inherently loaded term, carrying with it a host of negative baggage. But it is possible bias is more fundamental to the way we operate than we might like to think - inextricable from the human character and therefore anything we produce.

According to Alexander Linder, VP Analyst at Gartner, the pursuit of impartial AI is misguided and impractical, by virtue of this very human paradox.

“Bias cannot ever be totally removed. Even the attempt to remove bias creates bias of its own - it’s a myth to even try to achieve a bias-free world,” he told TechRadar Pro.

Tomsett, meanwhile, strikes a slightly more optimistic note, but also gestures towards the futility of an aspiration to total impartiality.

“Because there are different kinds of bias and it is impossible to minimize all kinds simultaneously, this will always be a trade-off. The best approach will have to be decided on a case by case basis, by carefully considering the potential harms from using the algorithm to make decisions,” he explained.

“Machine learning, by nature, is a form of statistical discrimination: we train machine learning models to make decisions (to discriminate between options) based on past data.”

The attempt to rid decision making of bias, then, runs at odds with the very mechanism humans use to make decisions in the first place. Without a measure of bias, AI cannot be mobilised to work for us.

It would be patently absurd to suggest AI bias is not a problem worth paying attention to, given the obvious ramifications. But, on the other hand, the notion of a perfectly balanced data set, capable of rinsing all discrimination from algorithmic decision-making, seems little more than an abstract ideal.

Life, ultimately, is too messy. Perfectly egalitarian AI is unachievable, not because it’s a problem that requires too much effort to solve, but because the very definition of the problem is in constant flux.

The conception of bias varies in line with changes to societal, individual and cultural preference - and it is impossible to develop AI systems within a vacuum, at a remove from these complexities.

To be able to recognize biased decision making and mitigate its damaging effects is critical, but to eliminate bias is unnatural - and impossible.

Beware calls from unknown numbers – this top messaging app has placed millions of iOS and Android users at risk

Researchers have identified a critical vulnerability in popular privacy-centric messaging app Signal, affecting millions of iOS and Android users.

Discovered by security firm Tenable, the bug could allow hackers to gain access to users’ coarse location data and map out patterns of movement - such as time-periods during which a user is likely to be at home, work, or their favorite local haunt.

To execute an attack, the hacker need only use Signal to call another user, whose location could be compromised whether or not the call is answered.

The bug was introduced with Signal v4.59.0 on Android, while iOS users of any version since v3.8.0.34 could be at risk.

Signal vulnerability

The Signal messaging app features end-to-end encryption for both calls and text messages, attracting millions of privacy-conscious users every day across Android and iOS. Even infamous whistleblower and champion of data privacy Edward Snowden claims to “use Signal every day.”

However, according to an advisory published by Tenable, the app is not as watertight from a privacy perspective as its users might expect.

The newly discovered flaw can be used to leak information about a user’s DNS, which can in turn reveal coarse location data and allow the hacker to identify the victim’s location within a 400 mile radius. 

While this might appear inconsequential to most, using coarse location data in conjunction with DNS server pings from different networks (domestic Wi-Fi, public hotspots, 4G connections etc.) could be used by the hacker to make more precise location assumptions.

Signal was quick to issue a patch for the vulnerability via GitHub, which Tenable commends in its advisory. However, the security firm believes the patch requires technical expertise beyond the abilities of most users, meaning hackers could abuse the flaw freely until a patch is made available on the Apple App Store and Google Play Store.

In the interim, Tenable recommends Signal users install a VPN service that offers a DNS tunnel, which can hinder an attacker’s ability to exploit the flaw.

Signal did not immediately respond to our request for comment.

How to change your Zoom background – and other fun tips

While the ability to change your Zoom background isn’t the video conferencing application’s most essential feature, it’s undoubtedly the most fun.

Available to free and paying users, Zoom backgrounds allow you to trade in your bomb-site bedroom or tired old office for another setting entirely with just a few clicks.

You can choose from a selection of stock options - such as the Golden Gate Bridge or a Caribbean beach - or you can upload an image or video from your own device.

Zoom backgrounds have been especially cherished in recent weeks, with millions of workers forced to rely on Zoom video conferencing to communicate with colleagues as a result of quarantine measures.

While remote working veterans may have the luxury of a dedicated video conferencing space, Zoom backgrounds provide a great alternative for those of us whose home office doesn’t provide the ideal setting for a Monday morning meeting.

To find out more about Zoom video conferencing, how it works and the latest news, check out our guide and how to use Zoom page.

How to use Zoom backgrounds

Activating, configuring and using Zoom backgrounds is simple, requiring only a few steps.

First, ensure the Zoom backgrounds feature is enabled in your account settings. You can do this by accessing your account page via your web browser, navigating to Settings in the left-hand bar, clicking on In Meeting (Advanced) and toggling Virtual Backgrounds to on (the slider will turn blue once the feature has been activated).

To configure your Zoom background, log into the desktop application and click on the settings icon in the top right corner. Under the Virtual Background tab, you can choose from stock options, or upload an image or video via the + icon below the video feed.

Whichever background you select in the settings pane will automatically be applied when you next log into a video conference (which is worth bearing in mind if you’re also using the application to conference with friends in your downtime).

For the best results, it’s important to ensure the video is bright and evenly lit. Zoom backgrounds don’t perform quite as well in partial darkness or glaring light, which can both result in unwanted distortion.

Using a green screen backdrop is ideal, but impractical for many, and a plain background of any colour works perfectly well so long as your clothing isn’t the same colour.

Creating custom Zoom backgrounds

Creating your own custom Zoom background is simple, especially with the newly released tool from Canva.

If you’re a whizz with Photoshop (or any other image editing software), you have the freedom to create any background you like, but Canva provides a simple alternative for those of us not gifted in design.

Canva’s tool allows you to create your own Zoom background using its library of millions of illustrations and icons, and customise the design with a few easy-to-use editing tools.

For those who haven’t used Canva before, there are 80 ready-made templates available, ranging from star sign-themed illustrations to landscape shots. We’re partial to this lemons background ourselves.

The required image dimensions depend on the resolution of the webcam you’re using. If you’re not sure, you can find out using this webcam resolution test.

According to Zoom’s general guidelines, images should have a minimum resolution of 1280x720 pixels, but the higher the resolution the better. Videos, meanwhile, should have a minimum resolution of 1280x720 pixels and a maximum resolution of 1920x1080 pixels.
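
If you want to confirm an image meets those guidelines before uploading it, a few lines of Python with the Pillow library will do the job - a quick, unofficial sketch, with the filename below standing in for your own image.

```python
# Quick sketch: check a candidate Zoom background against the resolution
# guidelines quoted above (minimum 1280x720 pixels for images).
# Requires Pillow (pip install Pillow); "background.jpg" is a placeholder.
from PIL import Image

MIN_WIDTH, MIN_HEIGHT = 1280, 720

with Image.open("background.jpg") as img:
    width, height = img.size
    print(f"Image is {width}x{height} pixels")
    if width >= MIN_WIDTH and height >= MIN_HEIGHT:
        print("Meets Zoom's minimum resolution guideline.")
    else:
        print("Below the 1280x720 minimum - consider a larger image.")
```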

Free Zoom background images

If you don’t want to go to the trouble of creating your own Zoom background, you could opt for one of the following popular options. Thanks to a few generous companies, you’re spoilt for choice.

You can sit on the Iron Throne from Game of Thrones or ride with Westworld’s finest cowboys and cowgirls, thanks to this complimentary selection from HBO.

Both NBC and FOX also tweeted out some fine options, including the couch from The Simpsons, the Bob’s Burgers restaurant from Bob’s Burgers, and Leslie’s office from Parks and Recreation.

The BBC has also released an image gallery of 100 empty television sets, spanning sitcoms, soaps and game shows. The gallery includes sets from fan-favorite shows such as Fawlty Towers, Only Fools and Horses and Doctor Who.

If you’re looking for something a little more business appropriate, image gallery Shutterstock has also provided a pack of free Zoom background images, including this tranquil shot of Mt. Fuji in Autumn.

Searching for Zoom backgrounds (or Zoom virtual backgrounds) on Twitter will also yield hundreds of great options created by talented artists or offered up by IP holders - and plenty of memes too.

More free options:

- Bethesda
- Spongebob’s pineapple
- DC Comics
- The ‘This Is Fine’ meme
- Pixar

The best webcams

If you’re after a new webcam for Zoom video conferencing - or indeed any other kind of video conferencing - these are our top recommendations right now.

Zoom outages struck this weekend, but it was quick with a fix

Video conferencing platform Zoom, which has become a crutch for millions of users during the pandemic, suffered a widespread outage this weekend.

Users across the US and UK encountered issues with both hosting and joining meetings on Sunday, with the outage reportedly disrupting all manner of social activities - including virtual church services.

Zoom was quick with a fix, however, managing to resolve the issue within two hours of announcing the investigation via its service status page.

“Users should now be able to host, join and participate in Zoom Meetings and Zoom Video Webinars if they restart their sessions,” read a subsequent status update.

Zoom outage

Zoom has experienced a sharp uptick in user numbers during the pandemic and currently serves over 300 million daily meeting participants. 

Many now rely on the service not only to conduct business during the week, but also to socialize on the weekend, meaning the outage would likely have inconvenienced a significant number of users.

Zoom first recognised the issue with a post to its status page on Sunday at 10AM ET: “Our team is investigating the root cause of issues joining Zoom Meetings. These issues appear to be limited to a subset of users.”

By 11:39AM ET, the video conferencing giant had remedied the problem and published advice for affected users - although it did not disclose the nature of the issue.

“We sincerely apologize for any inconvenience this might have caused,” said a Zoom spokesperson.

The service also experienced a blip in early April, with users in the US and Europe served error messages at login, but has for the most part held strong under the increased load brought about by the pandemic.

Via The Verge 

Cloud adoption has soared during the pandemic, but there’s a catch

As the rapid transition to remote working as a result of lockdown measures has proven, necessity really is the mother of invention.

Practically overnight, businesses that might otherwise have dragged their feet over digital transformation were forced to reinvent the ways in which they work and communicate - and that involved a newfound dependence on cloud-based applications.

“The pandemic put new fuel into the cloud adoption engine. Resistance to change evaporated overnight and now all business applications are in the cloud,” Nico Fischbach, CTO at cybersecurity firm Forcepoint, told TechRadar Pro.

“It’s funny - macro events are much more forceful than any board of directors...and now [due to the pandemic] the Internet has become your corporate environment.”

However, while the acceleration of technology trends can be considered one of the very few positives to come out of the pandemic, the pace with which businesses were forced to innovate inevitably carries a level of risk.

Remote working security

Under the new remote working regime, enterprise security perimeters expanded by orders of magnitude almost overnight, posing an unprecedented challenge for cybersecurity teams.

The introduction of new endpoints to the corporate network - as a result of bring your own device (BYOD) initiatives - the use of unauthorized communications tools and an influx of phishing attacks are all headaches that have grown more acute since the pandemic began.

“It’s not just bring your own device (BYOD), it’s now bring your own shared device,” Fischbach points out. “Employees are working with company data on devices also used by their children and partners - and that device is likely to be completely unmanaged.”

Further, understanding precisely where business data is held and how it is being used by employees has become far more difficult. Liberated from the watchful eye of the IT department, staff are inclined to take shortcuts (such as transferring data via USB devices or sending information using personal email accounts) that could jeopardise the security of sensitive data.

Pivoting to new modes of operation, according to Fischbach, could prove a significant challenge for security teams, who were for the most part totally blindsided by the transition to remote working.

“For the last decade [security teams] have only been looking at a pyramid of security and everything inside was trusted - their task was only to build more walls around it. Now, the way you have to be wired is very different.”

“It takes a mindset change, understanding and experience to readjust and visualise how the flows of communication and data have changed. Whether teams are equipped to do that remains to be seen.”

And if these issues weren’t enough to deprive security teams of well-earned sleep, Fischbach also believes the most resourceful cybercriminals may have used this period of turbulence to sow the seeds of future attacks. 

“The more organised bad guys - think nation state or well-funded groups - could be using the noise created by the scramble to change network architecture to compromise environments without detection,” he said.

“[These hackers] could fly under the radar and create a pivot point inside organisations that they can use at a later date. I’m pretty sure this has happened.”

This is surely the cheapest way to learn a new language from home

Language teaching company Rosetta Stone has just made it a whole lot cheaper to learn a brand new language from home - and what better way to spend your time in lockdown?

The company has many different language courses to choose from, including Spanish, French and German, but also a number of slightly less traditional options such as Filipino, Hebrew and Persian.

The 10-minute lessons are founded on the idea that an interactive approach allows you to learn a new language at a much faster rate and in far greater depth than old-school vocab memorization ever could.

Rosetta Stone's 3-month single language subscriptions are now permanently available for a new everyday price of $11.99 per month, which could be the way to go if you’ve never used a similar service and just want to dip your toes in the water.

However, the firm also recently rolled out an unlimited languages offering available in 12-month and 24-month subscriptions, which have flown off the (virtual) shelves. The year-long package is available for $7.99pm, while the two-year plan is better value for money at $5.99pm, but of course involves a longer commitment.

For the best value, though, the most committed linguists should opt for a lifetime unlimited languages subscription. For a one-off fee of $199.00, you get access to all language courses, Rosetta Stone’s award-winning app, real-time accent feedback via the speech-recognition engine and more.
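
For a rough sense of how those plans compare over their full terms, here is a quick back-of-the-envelope calculation using the prices quoted above (illustrative arithmetic only - check Rosetta Stone's site for current pricing). By this reckoning, the lifetime plan overtakes the two-year rate after roughly 33 months of continued use.

```python
# Back-of-the-envelope total cost of each Rosetta Stone plan, using the
# prices quoted in the article. Purely illustrative arithmetic.
plans = {
    "3-month, single language": (11.99, 3),
    "12-month, unlimited languages": (7.99, 12),
    "24-month, unlimited languages": (5.99, 24),
}

for name, (per_month, months) in plans.items():
    print(f"{name}: ${per_month * months:.2f} total (${per_month}/month)")

print("Lifetime, unlimited languages: $199.00 one-off")
```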

Unfortunately, one-on-one tutoring sessions are not included in the packages as standard, but can be tacked on at a later date if you like - although this will roughly double the price.

Note, subscriptions automatically renew at the full retail price once your subscription period has ended, so be sure to disable auto-renew if you don’t fancy extending your package.

Bitcoin halving sees mining profits slashed for third time

Bitcoin has surpassed a significant technical milestone that saw rewards for mining activity halved for the third time in the cryptocurrency’s 11-year history.

Occurring roughly every four years, a so-called halving (or halvening) event cuts the reward for successfully validating a new “block” in half. The latest halving saw compensation fall from 12.5 to 6.25 bitcoin - or from roughly $110,000 to $55,000 at current market rates.

The highly anticipated halvening was triggered at 19:23 UTC on May 11 with the addition of block 630,000 to the Bitcoin blockchain.

Bitcoin halving

Bitcoin is the world’s first cryptocurrency and the largest today by market capitalization, followed by Ethereum and XRP. 

The Bitcoin halving mechanism is built into the system in order to incrementally reduce the rate at which new coins are minted, thereby slowing progress towards the maximum number of coins that can ever be in circulation: 21 million.

The number of coins currently in existence sits at 18 million, with the cap (the role of which is to simulate scarcity) expected to be reached at some point in the first half of next century.
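
The schedule behind those numbers is simple arithmetic: the block subsidy started at 50 bitcoin and halves every 210,000 blocks, which is why block 630,000 marked the third halving and why total supply converges on 21 million. The short Python sketch below illustrates the schedule (satoshi-level rounding is ignored).

```python
# The Bitcoin block subsidy starts at 50 BTC and halves every 210,000 blocks,
# which is why block 630,000 triggered the third halving and why total supply
# converges on roughly 21 million coins. Satoshi-level rounding is ignored.
HALVING_INTERVAL = 210_000
INITIAL_REWARD = 50.0

def block_reward(height: int) -> float:
    """Return the block subsidy in BTC at a given block height."""
    return INITIAL_REWARD / (2 ** (height // HALVING_INTERVAL))

print(block_reward(629_999))  # 12.5 BTC, the pre-halving reward
print(block_reward(630_000))  # 6.25 BTC, the post-halving reward

# Summing the rewards of every era shows why supply approaches 21 million.
total = sum(HALVING_INTERVAL * INITIAL_REWARD / (2 ** era) for era in range(64))
print(f"Approximate eventual supply: {total:,.0f} BTC")
```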

Since the cryptocurrency’s creation in 2009, halvings have taken place in November 2012, July 2016 and May 2020 - with the next set to occur in May 2024.

The immediate ramification of the latest halving is that revenue brought in by mining operations will be cut in half, making maintaining the Bitcoin network (by validating transactions and participating in the creation of new blocks) a far less economically attractive proposition.

While larger mining consortia are likely to be able to shoulder the reduction in revenue, the halving is expected to root out smaller miners who are unable to balance the cost of resources with the decreased earnings.

The miners now ejected from the Bitcoin ecosystem are expected to redirect their computing resources towards the maintenance of more lucrative cryptocurrency networks.

The first block to fall under the new reward rate (block 630,001) was reportedly mined by Chinese mining syndicate Antpool, which boasts the greatest total computing power of any mining operation in the world.

Via CoinDesk

Here’s why you should never leave anyone alone with your laptop

A flaw in the common Intel Thunderbolt port could allow hackers to break into affected devices in a matter of minutes, researchers have claimed.

The vulnerability is found in millions of Windows and Linux PCs manufactured before 2019 and can be used by an attacker with physical access to the device to circumvent both password protection and hard disk encryption.

Uncovered by security researcher Björn Ruytenberg of the Eindhoven University of Technology, the physical access attack - which he refers to as Thunderspy - can scrape data from the target machine without leaving so much as a trace.

The issue reportedly cannot be resolved via a simple software fix - but only by deactivating the vulnerable port.

Thunderbolt vulnerability

The newly discovered Thunderbolt vulnerability opens the door to what Ruytenberg refers to as an “evil maid attack” - an attack that can be executed if the hacker is afforded time alone with a device.

“All the evil maid needs to do is unscrew the backplate, attach a device momentarily, reprogram the firmware, reattach the backplate, and the evil maid gets full access to the laptop. All of this can be done in under five minutes,” he explained.

According to Ruytenberg, the Thunderspy technique (demonstrated in this video) only requires circa $400 worth of equipment, which can be used to rewrite the Thunderbolt controller’s firmware and override security mechanisms.

The researcher disclosed his findings to Intel in February, as acknowledged by the firm in a recent blog post, in which it also sets out its advice to affected users.

“While the underlying vulnerability is not new and was addressed in operating system releases last year, the researchers demonstrated new potential physical attack vectors using a customized peripheral device,” said the firm.

Intel also stressed that the most widely used operating systems have all introduced Kernel Direct Memory Access (DMA) protection to shield against attacks such as this.

“The researchers did not demonstrate successful DMA attacks against systems with these mitigations enabled. Please check with your system manufacturer to determine if your system has these mitigations incorporated,” the company advised.

Unless you happen to be living with an “evil maid” under quarantine, your device is most likely safe for now. However, Intel has recommended owners of affected devices use only trusted peripherals and do not leave devices unattended for an extended period if possible.

Via WIRED

Adult streaming site leaks info on millions of users

Millions of users of a major adult live streaming platform have had their identities leaked online after the site suffered a massive data breach.

CAM4 suffered a significant incident caused by a server configuration error, making 7TB of user data (comprising 10.88 billion records in total) easily discoverable online, according to security researchers at Safety Detective.

While the misconfigured ElasticSearch database did not betray users’ specific sexual preferences, it did include personally identifiable information including names, email addresses, payment details, chat logs and sexual orientation.

CAM4 data breach

The popular adult platform is used primarily by amateur webcam models to stream explicit content to live audiences. To gain access to premium content or tip performers, users must first register with the site - parting ways with both personal and financial data.

According to the researchers, there is no evidence the breach was caused by a cyberattack or that data was siphoned from the database. However, incidents such as this do form the basis of the main argument against closer regulation of pornographic websites - a project abandoned by the UK over fears user privacy could be compromised in the event of a breach or hack.  

The timing of the CAM4 breach is also far from ideal, with traffic to pornography websites through the roof as a result of the coronavirus pandemic. Pornhub, for instance, saw traffic spike by 24.4% in late March, in line with the widespread introduction of lockdown measures.

It is unclear precisely how many CAM4 users were compromised, but analysis suggests records relating to circa 6.6 million US users were present on the server, with Brazilians, Italians and the French also among the most widely represented demographics.

Thankfully, only a few hundred entries revealed both a user’s full name and credit card information - a particularly dangerous combination due to the opportunity for financial fraud.

CAM4 did not immediately respond to our request for comment, but has since secured the vulnerable server.

Via Safety Detectives

AWS files another JEDI complaint against Microsoft

Amazon Web Services (AWS) has mounted yet another appeal against the US government’s decision to award Microsoft the $10 billion Pentagon cloud contract, known as JEDI.

Despite an earlier investigation into the procurement process finding no evidence of coercion or interference by US President Donald Trump, AWS has now submitted a fresh appeal.

Unlike the original dispute, Amazon’s latest appeal has been lodged directly with the Department of Defense (DoD) - and its contents remain confidential, unavailable even to Microsoft.

Pentagon JEDI contract

The highly lucrative Joint Enterprise Defense Infrastructure (JEDI) contract is designed to deliver a significant upgrade to the Pentagon’s IT operations and cloud computing capabilities.

The bid was hotly contested by Microsoft, AWS, Google Cloud and others, with the contract ultimately awarded to the Redmond giant in late October - a decision that infuriated Amazon and sparked a subsequent appeal.

In February, Amazon succeeded in having the project frozen until an investigation into the procurement process had been performed, although the probe later conducted by the DoD watchdog uncovered no evidence of foul play.

Eager to begin work on the contract in earnest, Microsoft appears to have reached the end of its tether with the continued delays, pulling no punches in a new blog post authored by the firm’s communications lead.

“This latest filing - filed with the DoD this time - is another example of Amazon trying to bog down JEDI in complaints, litigation and other delays designed to force a do-over to rescue its failed bid,” wrote Frank Shaw, Microsoft’s VP Communications.

“Amazon is at it again, trying to grind this process to a halt, keeping vital technology from the men and women in uniform - the very people Amazon says it supports.”

The war of words between the two cloud giants continued, with an Amazon spokesperson referring to Shaw’s blog post as “posture”.

“Anybody who’s studied the cloud computing space will tell you that AWS has a much more functional, capable, cost-effective and operationally strong offering,” said the spokesperson.

Via Geekwire
