
How creating an Action can complement your Android app


Posted by Neto Marin - Actions on Google Developer Advocate

There are millions of apps in the Android ecosystem, so helping yours get discovered can require some investment. Your app needs to offer something that differentiates it from other similar apps to stand out to users.

Building a companion Action is a fast and simple way to increase your Android app's potential reach by creating a new entry point from devices covered by the Google Assistant. It lets you bring your services to users through voice, without requiring them to install anything, and can bring people into your app when it can provide more value.

Your companion Action complements your Android app's experience by offering some of your services through the Google Assistant, which is available on more than 500 million devices including speakers, phones, cars, headphones, and more. Creating an Action provides a frictionless way for users to start engaging with your services wherever the Google Assistant is available.

Creating an Action for the Assistant will extend your brand presence, bringing your services to new devices and contexts as users interact with the Google Assistant.

Feature what your app does better

It is probably a mistake to try to rewrite all of your Android app as a conversational Action, since voice is a different modality with different constraints and usage patterns. Instead, you should start by selecting the most important or popular features in your app that translate well into a voice context and can be more easily accomplished there. Then, you can create your conversational experience to offer these features on Google Assistant devices. Check out the Conversation design site, which has several articles and guides about how to create a great voice UI.

Let's take a look at a hypothetical example. Imagine you have a mobile commerce app. Some features include searching for products, navigating to different categories, adding payment information, and checking out. You could build an Action for the Assistant with most of the same functionality, but we encourage you to look for what makes the most sense in a conversational experience.

In this case, your Action could focus on everything that a user would want to know after they've purchased a product through your Android app or web page. You could offer a quick way to get updates about a purchase's status (if your payment and purchase process has distinct states) and shipment information, or provide an interface for re-ordering a user's favorite products. Then, your users would be able to ask something like, "Hey Google, ask Voice Store about my last purchase."

Or, to reach users who have never made a purchase before, you could create an Action to offer exciting deals for common products. For example, you could create an Action that is invoked with, "Hey Google, ask Voice Store what are the deals on TVs today".

As you can see, starting with a "hero" use case for your Action is an exciting way to introduce conversational features that complement your Android app, and it will take less time than you think.

At Google I/O 2018, we presented a talk, "Integrating your Android apps with the Google Assistant" which contains more details and examples for developers.

Delivering users' purchases across surfaces

In-app purchases, subscriptions, and one-time products have proven to be successful monetization tools for Android developers, letting them offer different kinds of digital goods and additional value to paying users. These models drive user conversion and make an app more profitable.

Google Play Billing offers a series of tools, APIs, and documentation to help developers manage the subscription life-cycle, build server-side validation, and much more. If you are new to in-app billing, check out the Google Play Billing Overview page.

Now, Android developers can expand where users can access these goods or upgraded experiences by offering them through Actions, as well. This expansion is accomplished by honoring the user's entitlements on Google Play across different surfaces and devices, reaching users when they can't (or don't want to) use an app, like while cooking or driving.

For non-Android platforms, you'll need to ask your users to link their accounts. You can then use their account history to identify which purchases they've made on other surfaces.

Check the Accessing Digital Purchases page for a step-by-step guide on how to enable access to the user's purchases and request and parse the purchase data.

What's next?

If you are not familiar with Actions on Google yet, start by checking out our overview page, which describes the platform in detail and tells you all you need to know to create your Actions for the Google Assistant.

Stay tuned for more posts about how to improve your Android app experience with Actions on Google.

Thanks for reading!


Android has created more choice, not less


If you buy an Android phone, you’re choosing one of the world’s two most popular mobile platforms—one that has expanded the choice of phones available around the world.

Today, the European Commission issued a competition decision against Android and its business model. The decision ignores the fact that Android phones compete with iOS phones, something that 89 percent of respondents to the Commission's own market survey confirmed. It also misses just how much choice Android provides to thousands of phone makers and mobile network operators who build and sell Android devices; to millions of app developers around the world who have built their businesses with Android; and to billions of consumers who can now afford and use cutting-edge Android smartphones.

Today, because of Android, there are more than 24,000 devices, at every price point, from more than 1,300 different brands, including Dutch, Finnish, French, German, Hungarian, Italian, Latvian, Polish, Romanian, Spanish and Swedish phone makers.


The phones made by these companies are all different, but have one thing in common—the ability to run the same applications. This is possible thanks to simple rules that ensure technical compatibility, no matter what the size or shape of the device. No phone maker is even obliged to sign up to these rules—they can use or modify Android in any way they want, just as Amazon has done with its Fire tablets and TV sticks.

To be successful, open-source platforms have to painstakingly balance the needs of everyone that uses them. History shows that without rules around baseline compatibility, open-source platforms fragment, which hurts users, developers and phone makers. Android’s compatibility rules avoid this, and help make it an attractive long-term proposition for everyone.

Creating flexibility, choice and opportunity

Today, because of Android, a typical phone comes preloaded with as many as 40 apps from multiple developers, not just the company you bought the phone from. If you prefer other apps—or browsers, or search engines—to the preloaded ones, you can easily disable or delete them, and choose other apps instead, including apps made by some of the 1.6 million Europeans who make a living as app developers.

Removing and replacing preloaded apps

In fact, a typical Android phone user will install around 50 apps themselves. Last year, over 94 billion apps were downloaded globally from our Play app store; browsers such as Opera Mini and Firefox have been downloaded more than 100 million times, UC Browser more than 500 million times.

This is in stark contrast to how things used to be in the 1990s and early 2000s—the dial-up age. Back then, changing the pre-installed applications on your computer, or adding new ones, was technically difficult and time-consuming. The Commission’s Android decision ignores the new breadth of choice and clear evidence about how people use their phones today.

A platform built for the smartphone era

In 2007, we chose to offer Android to phone makers and mobile network operators for free. Of course, there are costs involved in building Android, and Google has invested billions of dollars over the last decade to make Android what it is today.  This investment makes sense for us because we can offer phone makers the option of pre-loading a suite of popular Google apps (such as Search, Chrome, Play, Maps and Gmail), some of which generate revenue for us, and all of which help ensure the phone ‘just works’, right out of the box. Phone makers don’t have to include our services; and they’re also free to pre-install competing apps alongside ours. This means that we earn revenue only if our apps are installed, and if people choose to use our apps instead of the rival apps.

Good for partners, good for consumers

The free distribution of the Android platform, and of Google’s suite of applications, is not only efficient for phone makers and operators—it’s of huge benefit for developers and consumers. If phone makers and mobile network operators couldn’t include our apps on their wide range of devices, it would upset the balance of the Android ecosystem. So far, the Android business model has meant that we haven't had to charge phone makers for our technology, or depend on a tightly controlled distribution model.  

We’ve always agreed that with size comes responsibility. A healthy, thriving Android ecosystem is in everyone’s interest, and we’ve shown we’re willing to make changes. But we are concerned that today’s decision will upset the careful balance that we have struck with Android, and that it sends a troubling signal in favor of proprietary systems over open platforms.  

Rapid innovation, wide choice, and falling prices are classic hallmarks of robust competition and Android has enabled all of them. Today’s decision rejects the business model that supports Android, which has created more choice for everyone, not less. We intend to appeal. 

#AndroidWorks

Source: Android


Updating your games for modern Android


Posted by Tom Greenaway, Senior Partner Developer Advocate

Last year we announced that starting from August 2018 Google Play will require all new apps and games to target a recent Android API level – set to API level 26 (Android 8.0 Oreo), or higher. Additionally, this requirement will extend to updates for existing apps and games starting from November 2018.

Every new Android version introduces changes that bring significant security and performance improvements – and enhance the user experience of Android overall. Updating your games to target the latest API level ensures that your users can benefit from these improvements, while still allowing your games to run on older Android versions.

Simple next steps:

  • Install the Android 8.0 Oreo SDK (API level 26) via Android Studio by navigating to Tools > Android > SDK Manager > Android SDK > SDK Platforms.
  • Update your game to target API level 26 and check whether it has any incompatibilities or issues as soon as possible (see the Gradle sketch after this list). Update any external dependencies as necessary. Learn more about the incremental changes between versions of Android here.
  • If you are using an advertising network, SDK or plugin which is incompatible with API level 26, reach out to your contacts and find out their timeline for supporting target API level 26. The sooner they're aware of these changes the better.
  • If you build your game with Unity, support for target API 26 is built into Unity 5.6.6 and beyond. Simply ensure the latest target API level is selected in your Android build settings for Unity (Build Settings > Android > Player Settings). For versions of Unity 5.6.5 and prior, consult this documentation which includes a workaround for versions dating back to 4.3.
  • For games built with Unreal, check your Android platform settings has the "Target SDK Version" set to 26.
  • If you use Cocos2D-X, check the target API level in the gradle.properties file that is generated.
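
If you build your game without an engine and configure it directly with Gradle, the change is a one-line update in your module's build file. Below is a minimal sketch using the Gradle Kotlin DSL; the application ID and version values are illustrative, and the exact property syntax may vary slightly with your Android Gradle Plugin version.

// Module-level build.gradle.kts (illustrative values)
plugins {
    id("com.android.application")               // Android Gradle Plugin
}

android {
    compileSdkVersion(26)                       // compile against the Android 8.0 (Oreo) SDK

    defaultConfig {
        applicationId = "com.example.mygame"    // hypothetical package name
        minSdkVersion(19)                       // keep supporting older devices
        targetSdkVersion(26)                    // opt in to the Android 8.0 behavior changes
        versionCode = 1
        versionName = "1.0"
    }
}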

Significant changes to be aware of:

  • Since API 23, we have required that permissions be requested at runtime, which helps streamline the app install process.
  • Since API 24, apps can no longer dynamically link against non-NDK libraries. If your app (including third-party static libraries) contains native code, you should only be using public NDK APIs.
  • If your game uses Android push notifications, the Google Play Services SDK in your game will need to be updated to version 10.2.1 or above for your game to support API level 26.
  • If your game uses opaque binary blobs (OBB), then your game must check whether it can access the OBB directory before attempting to access the OBB files themselves. We recommend explicitly requesting permission for access using the Runtime Permissions API and gracefully handling cases where the permission is not granted (see the sketch after this list). Additionally, add an entry in the manifest for the external storage access:
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
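
For illustration, here is a minimal Kotlin sketch (not from the original post; the request code is arbitrary) of checking for the storage permission before touching the OBB directory, requesting it at runtime if needed, and degrading gracefully otherwise:

import android.Manifest
import android.app.Activity
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

private const val REQUEST_OBB_ACCESS = 42  // arbitrary request code

fun Activity.openObbIfAllowed() {
    val granted = ContextCompat.checkSelfPermission(
        this, Manifest.permission.READ_EXTERNAL_STORAGE
    ) == PackageManager.PERMISSION_GRANTED

    if (granted) {
        val dir = obbDir  // e.g. .../Android/obb/<your.package.name>
        // Safe to enumerate and read the OBB files here.
    } else {
        // Ask the user; handle the result in onRequestPermissionsResult().
        ActivityCompat.requestPermissions(
            this, arrayOf(Manifest.permission.READ_EXTERNAL_STORAGE), REQUEST_OBB_ACCESS
        )
    }
}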
    

Moving ahead

Remember, updating the target API level is just the first step – make sure your game is compatible with the behavior changes between your current target API level and API level 26. Check out further guidance on the changes in past versions of Android to help in your migration process. These policy changes are important for moving the Android ecosystem forward and keeping it healthy for our users – and yours.


Final preview update, official Android P coming soon!


Posted By Dave Burke, VP of Engineering

Android P is almost here! As we put the finishing touches on the new platform, today we're bringing you Android P Beta 4.

Beta 4 is the last preview milestone before we launch the official Android P platform later this summer. Take this opportunity to test your apps and publish updates, to make sure you offer a great experience for users transitioning to Android P!

What's in this update?

Today's Beta 4 update includes a release candidate build with final system behaviors and the official Android P APIs (API level 28), available since Beta 2. It includes everything you need to wrap up your testing in time for the upcoming official Android P release.

Get your apps ready for Android P

With the consumer launch coming soon, it's important to test your app for compatibility with Android P. Just install your current app from Google Play on an Android P Beta device or emulator. As you work through the flows, make sure your app runs and looks great, and that it handles the Android P behavior changes properly.

Also watch for uses of non-SDK interfaces in your app. Android P restricts access to selected non-SDK interfaces, so you should reduce your reliance on them. See our recent post for details.

After you've made any necessary updates, we recommend publishing to Google Play right away without changing the app's platform targeting. This lets you ensure a great experience for Android P users while you work on enhancing your app with Android P APIs and targeting.

Enhance your app with Android P features and APIs

When you're ready, dive into Android P and learn about the new features and APIs that you can use in your apps, like multi-camera support, display cutout, enhanced notifications, ImageDecoder, TextClassifier, and others.

To build with the new APIs, just download the official API 28 SDK and tools into Android Studio 3.1, or use the latest version of Android Studio 3.2. Then update your project's compileSdkVersion and targetSdkVersion to API 28. When you change your targeting, make sure your app supports all of the applicable behavior changes.

As soon as you're ready, publish your APK updates that are compiled against, or optionally targeting, API 28. A common strategy is to use Google Play's beta testing feature to get early feedback from a small group of users and then do a staged rollout to production.

Visit the Developer Preview site for details and documentation. Also check out this video and the Google I/O Android playlist for more on what's new in Android P for developers.

How do I get Beta 4?

It's easy - you can get Android P Beta 4 on Pixel devices by enrolling here. If you're already enrolled in our Android Beta program, you'll automatically get the Beta 4 update soon. As always, downloadable system images for Pixel devices are also available. Partners who are participating in the Android P Beta program will also be updating their devices to Beta 4 over the coming weeks.

What's next?

Stay tuned for the official Android P launch coming soon! You can continue to share your feedback or requests in the meantime, and feel free to use our hotlists for platform issues, app compatibility issues, and third-party SDK issues.

Thanks for your feedback so far, and thank you to everyone who participated in our recent Reddit AMA on r/androiddev!

AndroidX Development is Now Even Better

Posted by Aurimas Liutikas, software engineer on the AndroidX team

AndroidX (previously known as Android Support Library) started out as a small set of libraries intended to provide backwards compatibility for new Android platform APIs and, as such, its development was strictly tied to the platform. As a result, all work was done in internal Google branches and then pushed to the public Android Open Source Project (AOSP) together with the platform push. With this flow, external contributions were limited to a narrow window of time where the internal and AOSP branches were close in content. On top of that, it was difficult to contribute -- in order to do a full AndroidX build and testing, external developers had to check out >40GB of the full Android platform code.

Today, the scope of AndroidX has expanded dramatically and includes libraries such as AppCompat for easier UI development, Room for database management, and WorkManager for background work. Many of these libraries implement higher-level abstractions and are less tied to new revisions of the Android platform, and all libraries are designed with backwards compatibility in mind from the start. Several libraries, such as RecyclerView and Fragment, are purely AndroidX-side implementations with few ties to the platform.

Starting a little over two years ago, we began a process of unbundling -- moving AndroidX out of Android platform builds into its own separate build. We had to do a great deal of work, including migrating our builds from make to Gradle as well as migrating all of our API tracking tools and documentation generation out of the platform build. With that process completed, we reached a point where a developer can now check out a minimal AndroidX project, open it in Android Studio, and build using the public SDK and public Android Gradle Plugin.

The Android developer community has long expressed a desire to contribute more easily to AndroidX; however, this was always a challenge due to the reasons described above. This changes today: AndroidX development is moving to public AOSP. That means that our primary feature development (except for top-secret integrations with the platform 😀) and bug fixes will be done in the open using the r.android.com Gerrit review tool and changes will be visible in the aosp/androidx-master-dev branch.

We are making this change to give better transparency to developers; it gives developers a chance to see features and bug fixes implemented in real-time. We are also excited about receiving bug fix contributions from the community. We have written up a short guide on how to go about contributing a patch.

In addition to regular development, AOSP will be a place for experimentation and prototyping. You will see new libraries show up in this repository; some of them may be removed before they ship, change dramatically during pre-alpha development, or merge into existing libraries. The general rule is that only the libraries on maven.google.com are officially ready for external developer usage.

Finally, we are just getting started. We apologize for any rough edges you might hit when contributing to AndroidX, and we ask for your feedback via the public AndroidX tracker if you run into any issues.

Supporting display cutouts on edge-to-edge screens


Posted By Megan Potoski, Product Manager, Android System UI

Smartphones are quickly moving towards smaller bezels and larger aspect ratios. On these devices, display cutouts are a popular way to achieve an edge-to-edge experience while providing space for important sensors on the front of the device. Sixteen cutout devices from 11 OEMs have already been released, including several Android P beta devices, with more on the way.

These striking displays present a great opportunity for you to showcase your app. They also mean it's more important than ever to make sure your app provides a consistently great experience across devices with one or two display cutouts, as well as devices with 18:9 and larger aspect ratios.

Examples of cutout devices: Essential PH-1 (left) and Huawei P20 (right).

Make your app compatible with display cutouts

With many popular and upcoming devices featuring display cutouts, what can you do to make sure your app is cutout-ready?

The good news is, for the most part your app should work as intended even on a cutout device. By default, in portrait mode with no special flags set, the status bar will be resized to be at least as tall as the cutout and your content will display in the window below. In landscape or fullscreen mode, your app window will be letterboxed so that none of your content is displayed in the cutout area.

However, there are a few areas where your app could have issues on cutout devices.

  • Watch out for any sort of hard-coding of status bar height -- this will likely cause problems. If possible, use WindowInsetsCompat to get status bar height.
  • In fullscreen, be careful to consider when to use window vs. screen coordinates, as your app will not take up the whole screen when letterboxed. For example, if you use MotionEvent.getRawX/Y() to get screen coordinates for touch events, make sure to transform them to the view's coordinates using getLocationOnScreen(), as shown in the sketch after this list.
  • Pay special attention to transitions in and out of fullscreen mode.
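
For example, here is a small Kotlin sketch (illustrative, not from the original post) of translating a touch event's raw screen position into a view's local coordinates, which stays correct when the window is letterboxed:

import android.view.MotionEvent
import android.view.View

// Convert a MotionEvent's raw (screen) position into coordinates local to `view`.
fun screenToViewCoordinates(view: View, event: MotionEvent): Pair<Float, Float> {
    val origin = IntArray(2)
    view.getLocationOnScreen(origin)   // view's top-left corner in screen coordinates
    val localX = event.rawX - origin[0]
    val localY = event.rawY - origin[1]
    return localX to localY
}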

Here are a few guidelines describing what issues to look out for and how to fix them.

Take advantage of the cutout area

Rendering your app content in the cutout area can be a great way to provide a more immersive, edge-to-edge experience for users, especially for content like videos, photos, maps, and games.

An example of an app that has requested layout in the display cutout.

In Android P we added APIs to let you manage how your app uses the display cutout area, as well as to check for the presence of cutouts and get their positions.

You can use layoutInDisplayCutoutMode, a new window layout mode, to control how your content is displayed relative to the cutout. By default, the app's window is allowed to extend into the cutout area if the cutout is fully contained within a system bar. Otherwise, the window is laid out such that it does not overlap with the cutout. You can also set layoutInDisplayCutoutMode to always or never render into the cutout. Using SHORT_EDGES mode to always render into the cutout is a great option if you want to take advantage of the full display and don't mind if a bit of content gets obscured by the cutout.

If you are rendering into the cutout, you can use getDisplayCutout() to retrieve a DisplayCutout that has the cutout's safe insets and bounding box(es). These let you check whether your content overlaps the cutout and reposition things if needed.

<style name="ActivityTheme">
  <item name="android:windowLayoutInDisplayCutoutMode">
    default/shortEdges/never
  </item>
</style>

Attribute for setting layoutInDisplayCutoutMode from the Activity's theme.
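
You can also opt in from code rather than the theme. The following Kotlin sketch (the activity name is hypothetical, and setContentView is omitted for brevity) requests SHORT_EDGES mode and then pads content away from the cutout using the reported safe insets:

import android.app.Activity
import android.os.Bundle
import android.view.WindowManager

class VideoActivity : Activity() {   // hypothetical activity
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // Request layout into the cutout area on the display's short edges.
        val params = window.attributes
        params.layoutInDisplayCutoutMode =
            WindowManager.LayoutParams.LAYOUT_IN_DISPLAY_CUTOUT_MODE_SHORT_EDGES
        window.attributes = params

        // When insets are dispatched, keep critical UI out of the unsafe area.
        window.decorView.setOnApplyWindowInsetsListener { view, insets ->
            insets.displayCutout?.let { cutout ->
                view.setPadding(
                    cutout.safeInsetLeft, cutout.safeInsetTop,
                    cutout.safeInsetRight, cutout.safeInsetBottom
                )
            }
            insets
        }
    }
}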

For devices running Android 8.1 (API 27), we've also back-ported the layoutInDisplayCutoutMode activity theme attribute so you can control the display of your content in the cutout area. Note that support on devices running Android 8.1 or lower is up to the device manufacturer, however.

To make it easier to manage your cutout implementation across API levels, we've also added DisplayCutoutCompat in the AndroidX library, which is now available through the SDK manager.

For more about the display cutout APIs, take a look at the documentation.

Test your app with cutout

We strongly recommend testing all screens and experiences of your app to make sure that they work well on cutout devices. We recommend using one of the Android P Beta Devices that features a cutout, such as the Essential PH-1.

If you don't have a device, you can also test using a simulated cutout on any device running Android P or in the Android Emulator. This should help you uncover any issues that your app may run into on devices with cutouts, whether they are running Android 8.1 or Android P.

What to expect on devices with display cutouts

Android P introduces official platform support for display cutouts, with APIs that you can use to show your content inside or outside of the cutout. To ensure consistency and app compatibility, we're working with our device manufacturer partners to mandate a few requirements.

First, devices must ensure that their cutouts do not negatively affect apps. There are two key requirements:

  • In portrait orientation, with no special flags set, the status bar must extend to at least the height of the cutout.
  • In fullscreen or landscape orientation, the entire cutout area must be letterboxed.

Second, devices may only have up to one cutout on each short edge of the device. This means that:

  • You won't see multiple cutouts on a single edge, or more than two cutouts on a device.
  • You won't see a cutout on the left or right long edge of the device.

Within these constraints, devices can place cutouts wherever they want.

Special mode

Some devices running Android 8.1 (API level 27) or earlier may optionally support a "special mode" that lets users extend a letterboxed fullscreen or landscape app into the cutout area. Devices would typically offer this mode through a toggle in the navigation bar, which would then bring up a confirmation dialog before extending the screen.

Devices that offer "special mode" allow users to optionally extend apps into the cutout area if supported by the app.

If your app's targetSdkVersion is 27 or higher, you can set the layoutInDisplayCutoutMode activity theme attribute to opt out of special mode if needed.

Don't forget: larger aspect ratios too!

While you are working on cutout support, it's also a great time to make sure your app works well on devices with 18:9 or larger aspect ratios, especially since these devices are becoming increasingly common and can feature display cutouts.

We highly encourage you to support flexible aspect ratios so that your app can leverage the full display area, no matter what device it's on. You should test your app on different display ratios to make sure it functions properly and looks good.

Here are some guidelines on screen support to keep in mind as you are developing; also refer to our earlier post on larger aspect ratios for tips on optimizing. If your app can't adapt to the aspect ratios of long screens, you can declare a max aspect ratio to request letterboxing on those screens.

Thanks for reading, and we hope this helps you deliver a delightful experience to all your users, whatever display they may have!

Try out Digital Wellbeing to find your own balance with Pixel


We all love our phones—the cameras capture the memories we make, they find us the best route to work each day, and they answer the questions we have throughout the day. But many of us can probably use a little bit of help disconnecting from our devices from time to time so we can focus on the other things in our lives.

Earlier this year, we previewed Digital Wellbeing, a new set of features across our products that aims to help people achieve their desired balance with the technology they use every day. Starting today, all Pixel users can try these features out for themselves as part of an exclusive beta:

  • The new Dashboard helps you understand how you’re spending time on your phone, with a daily overview, a graphic of how frequently you use different apps, how many times you unlock your phone, and how many notifications you receive.
  • App Timers let you limit the amount of time you spend using your favorite apps.  

  • Do Not Disturb helps eliminate the anxiety you may feel as notifications pile up. When you’re in a “devices down” meeting or at dinner with friends, Do Not Disturb can be set to keep all visual interruptions from appearing on your screen, including notifications, as well as the sounds.

  • You can activate the Wind Down feature so that at night, as you get close to bedtime, your device goes into Do Not Disturb mode and your screen fades to grayscale to help you disconnect.


The Digital Wellbeing beta is available for download on Pixel, Pixel XL, Pixel 2 and Pixel 2 XL now, and it will be pushed as an update to all Pixel devices later this year. Get step-by-step instructions on downloading the beta app and try out Digital Wellbeing for yourself.

Pixel keeps getting better, and with Android 9, it’ll also get a number of other AI-backed features, including Adaptive Battery, Adaptive Brightness, and a revamped look and feel. Find out more about Android 9.

Our goal has always been to deliver our newest features to Pixel users as soon as they are ready, and the Digital Wellbeing beta and Android 9 are part of that. So dig in!

Android 9 Pie: Powered by AI for a smarter, simpler experience that adapts to you


The latest release of Android is here! And it comes with a heaping helping of artificial intelligence baked in to make your phone smarter, simpler and more tailored to you. Today we’re officially introducing Android 9 Pie.

We’ve built Android 9 to learn from you—and work better for you—the more you use it. From predicting your next task so you can jump right into the action you want to take, to prioritizing battery power for the apps you use most, to helping you disconnect from your phone at the end of the day, Android 9 adapts to your life and the ways you like to use your phone.

Tailored to you

Android 9 aims to make your phone even smarter by learning from you and adapting to your usage patterns. That’s why Android 9 comes with features like Adaptive Battery, which learns the apps you use most and prioritizes battery for them, and Adaptive Brightness, which learns how you like to set the brightness in different settings, and does it for you.


Android 9 also helps you get things done faster with App Actions, which predicts what you'll want to do next based on your context and displays that action right on your phone. Say it's Tuesday morning and you're preparing for your commute: you'll see suggested actions like navigating to work on Google Maps or resuming an audiobook with Google Play Books. And when you put in headphones after work, you may see options to call your mom or start your favorite Spotify playlist.


Later this fall, we’ll also roll out Slices (pie...slices...get it?!) which shows relevant information from your favorite apps when you need it. If you start typing “Lyft” into Google Search, you’ll see a “slice” of the Lyft app, showing prices for your ride home and the ETA for a driver so you can take action more quickly and easily.


Now easy as pie

Making your phone smarter and more adaptive is important, but we also want Android to be easier to use and more approachable. In Android 9, we’ve introduced a new system navigation featuring a single home button.

This is especially helpful as phones grow taller and it’s more difficult to get things done on your phone with one hand. With a single, clean home button, you can swipe up to see a newly designed Overview, the spot where at a glance you have full-screen previews of your recently used apps.

Swipe up from anywhere to see full-screen previews of recently used apps and simply tap to jump back into one of them. If you find yourself constantly switching between apps on your Pixel, we’ve got good news for you: Smart Text Selection (which recognizes the meaning of the text you’re selecting and suggests relevant actions) now works on the Overview of your recent apps, making it easier to perform the action you want. You can enable this new system navigation in Settings once you’ve received your update to Android 9 (learn more in the help center).


Changing how you navigate your phone is a big deal, but small changes can make a big difference too. Android 9 also brings a redesigned Quick Settings, a better way to take and edit screenshots (say goodbye to the Vulcan grip that was required before), simplified volume controls, an easier way to manage notifications, and more. You'll notice small changes like these across the platform, to help make the things you do all the time easier than ever.

Find the balance that’s right for your life

While much of the time we spend on our phones is useful, many of us wish we could disconnect more easily and free up time for other things. In fact, over 70 percent of people we talked to in our research said they want more help with this. So we’ve been working to add key capabilities right into Android to help people achieve the balance with technology they’re looking for. 

At Google I/O in May, we previewed some of these digital wellbeing features for Android, including a new Dashboard that helps you understand how you’re spending time on your device; an App Timer that lets you set time limits on apps and grays out the icon on your home screen when the time is up; the new Do Not Disturb, which silences all the visual interruptions that pop up on your screen; and Wind Down, which switches on Night Light and Do Not Disturb and fades the screen to grayscale before bedtime.


Digital Wellbeing will officially launch on Pixel phones this fall, with Android One and other devices coming later this year. But these features are available in beta now for Pixel phones running Android 9. To try them out:

  1. Make sure you’re running Android 9 Pie on your device. (Learn how to check which version of Android you have.)

  2. Sign up for the beta with the email address you use with Google Play.

  3. Accept your invitation to become a beta tester by clicking the link in your welcome email.

Once you’ve accepted your invitation, Digital Wellbeing will appear in your phone’s Settings app. It may take up to 24 hours for Digital Wellbeing to appear on your device.

Security and privacy baked in

Improving security is always important in each of our platform releases. In addition to continuously hardening the platform and improving the security model for biometrics, Android 9 enables industry-leading hardware security capabilities that allow sensitive data like credit card information to be protected by a secure, dedicated chip. Android 9 also brings important privacy improvements, such as TLS by default and DNS over TLS, to help protect all web communications and keep them private.

Coming to a device near you

Starting today, an over-the-air update to Android 9 will begin rolling out to Pixel phones. And devices that participated in the Beta program from Sony Mobile, Xiaomi, HMD Global, Oppo, Vivo, OnePlus and Essential, as well as all qualifying Android One devices, will receive this update by the end of this fall. We're also working with a number of other partners to launch or upgrade devices to Android 9 this year.

Learn more about Android 9 Pie at android.com/9.


Introducing Android 9 Pie


Posted by Dave Burke, VP of Engineering

After more than a year of development and months of testing by early adopters, we're ready to launch Android 9 Pie, the latest release of Android, to the world.

Android 9 harnesses the power of machine learning to make your phone smarter, simpler, and tailored to you. Read all about the new consumer features here. For developers, Android 9 includes many new ways to enhance your apps and build new experiences to drive engagement.

You've given us tons of feedback along the way--over a thousand bugs and feature requests--thank you! More than 140,000 of you tried our preview builds through the Android Beta program, and seven of our device maker partners also brought our Beta to their flagship devices, enabling users around the world to give their feedback too.

Today we're pushing the source code to Android Open Source Project (AOSP), and starting the Android 9 rollout to all Pixel users worldwide, with Android 9 coming to many more devices in the coming months.

We continue to move Android forward as the premier open platform for developers worldwide to build their businesses. With Android 9 -- together with the powerful new capabilities in Google Play for apps and games -- we're committed to helping you build great experiences, as well as reach and engage the right users safely and cost-effectively around the world.

What's in Android 9?

A smarter smartphone, with machine learning at the core

Android 9 helps your phone learn as you use it, by picking up on your preferences and adjusting automatically. From helping users get the most out of their battery life to surfacing the best parts of the apps they use all the time, right when they need them most, Android 9 keeps things running smoother, longer.

Adaptive Battery

We partnered with DeepMind on a feature called Adaptive Battery that uses machine learning to prioritize system resources for the apps the user cares about most. If your app is optimized for Doze, App Standby, and Background Limits, Adaptive Battery should work well for you right out of the box. If you haven't yet optimized your app, make sure to check out the details in the power documentation to see how it works.

Slices

Slices can help users perform tasks faster by enabling engagement outside of the fullscreen app experience. They do this by using UI templates that can display rich, dynamic, and interactive content from your app within the Google Search app and, later, in other places like the Google Assistant. You can learn more about building Slices to enhance your app here.

App Actions

App Actions is a new way to raise the visibility of your app and drive engagement. Actions take advantage of machine learning to surface your app to the user at just the right time, based on your app's semantic intents and the user's context.

We'll be sharing more details in the coming weeks on registering your app to handle one or more user intents, so your apps can be enabled for App Actions and surfaced across multiple Google and Android surfaces in response to user queries.

Text Classifier and Smart Linkify

We've extended the ML models that identify entities in content or text input to support more types like Dates and Flight Numbers through the TextClassifier API. Smart Linkify lets you take advantage of the TextClassifier models through the Linkify API, including enriched options for quick follow-on user actions. Smart Linkify also delivers significant improvements in accuracy of detection as well as performance.

Neural Networks API 1.1

Android 9 adds an updated version of the Neural Networks API to extend Android's support for accelerated on-device machine learning. Neural Networks API 1.1 adds support for nine new ops -- Pad, BatchToSpaceND, SpaceToBatchND, Transpose, Strided Slice, Mean, Div, Sub, and Squeeze. A typical way to take advantage of the APIs is through TensorFlow Lite.

Getting the most from your phone -- more easily

We're excited about making your smartphone more intelligent. But it's also important that the technology fades to the back for users. In Android 9, we've evolved Android's UI to be simpler and more approachable -- for developers, these changes help improve the way users find, use, and manage your apps.

New system navigation

Android 9 introduces a new system navigation that we've been working on for more than a year. The new design helps make Android's multitasking more approachable and makes discovering apps much easier. You can swipe up from anywhere to see full-screen previews of recently used apps and simply tap to jump back into one of them.

Display cutout

Now your app can take full advantage of the latest edge-to-edge screens through display cutout support in Android 9. For most apps, supporting display cutout is seamless, with the system managing status bar height to separate your content from the cutout. If you have immersive content, you can use the display cutout APIs to check the position and shape of the cutout and request full-screen layout around it. To help with development and testing, we've added a Developer Option that simulates several cutout shapes on any device.

Apps with immersive content can display content fullscreen on devices with a display cutout.

Notifications and smart reply

Android 9 makes notifications even more useful and more actionable. Messaging apps can take advantage of the new MessagingStyle APIs to show conversations, attach photos and stickers, and even suggest smart replies. You'll soon be able to use ML Kit to generate smart reply suggestions for your app.

MessagingStyle notifications with conversations and smart replies [left], images and stickers [right].
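
As a rough illustration (using the AndroidX compat classes; the channel ID and contact names are placeholders), a MessagingStyle notification carrying a conversation could be built like this. To get smart replies, you would additionally attach a reply action whose builder calls setAllowGeneratedReplies(true).

import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat
import androidx.core.app.Person

fun showConversation(context: Context) {
    val me = Person.Builder().setName("You").build()
    val ana = Person.Builder().setName("Ana").build()   // hypothetical contact

    val style = NotificationCompat.MessagingStyle(me)
        .setConversationTitle("Weekend plans")
        .addMessage("Are we still on for Saturday?", System.currentTimeMillis(), ana)

    val notification = NotificationCompat.Builder(context, "messages")  // "messages" = assumed channel ID
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setStyle(style)
        .build()

    NotificationManagerCompat.from(context).notify(1, notification)
}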

Text Magnifier

In Android 9 we've added a Magnifier widget to improve the user experience of selecting text. The Magnifier widget lets users precisely position the cursor or the text selection handles by viewing zoomed text through a draggable pane. You can attach it to any view that is attached to a window, so you can use it in custom widgets or during custom text-rendering. The Magnifier widget can also provide a zoomed-in version of any view or surface, not just text.

Check out our recent blog post for more about this and other Text features, such as PrecomputedText and line height and baseline text alignment.

Security and privacy for users

Biometric prompt

With a range of biometric sensors in use for authentication, we've made the experience more consistent across sensor types and apps. Android 9 introduces a system-managed dialog to prompt the user for any supported type of biometric authentication. Apps no longer need to build their own dialog--instead they use the BiometricPrompt API to show the standard system dialog. In addition to Fingerprint (including in-display sensors), the API supports Face and Iris authentication.

If your app is drawing its own fingerprint auth dialogs, you should switch to using the BiometricPrompt API as soon as possible. See this post for more information.
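
Here is a minimal Kotlin sketch of the framework API on API 28 (the strings are placeholders, and error handling is reduced to comments):

import android.app.Activity
import android.hardware.biometrics.BiometricPrompt
import android.os.CancellationSignal

fun promptForBiometrics(activity: Activity) {
    val executor = activity.mainExecutor   // available on API 28+

    val prompt = BiometricPrompt.Builder(activity)
        .setTitle("Confirm it's you")
        .setDescription("Authenticate to continue")
        .setNegativeButton("Cancel", executor) { _, _ ->
            // The user tapped the negative button.
        }
        .build()

    prompt.authenticate(CancellationSignal(), executor,
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                // Unlock the feature or complete the sensitive operation here.
            }

            override fun onAuthenticationError(errorCode: Int, errString: CharSequence) {
                // Fall back to another authentication method or explain the failure.
            }
        })
}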

Protected Confirmation

Android 9 introduces Android Protected Confirmation, which uses the Trusted Execution Environment (TEE) to guarantee that a given prompt string is shown and confirmed by the user. Only after successful user confirmation will the TEE then sign the prompt string, which the app can verify.

Stronger protection for private keys

We've added StrongBox as a new KeyStore type, providing API support for devices that provide key storage in tamper-resistant hardware with isolated CPU, RAM, and secure flash. You can set whether your keys should be protected by a StrongBox security chip in your KeyGenParameterSpec.
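
For example, a Kotlin sketch (the key alias and algorithm parameters are illustrative) that tries to generate a StrongBox-backed signing key and falls back to the default Keystore when the device has no StrongBox chip:

import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import android.security.keystore.StrongBoxUnavailableException
import java.security.KeyPair
import java.security.KeyPairGenerator

fun generateSigningKey(strongBox: Boolean): KeyPair {
    val spec = KeyGenParameterSpec.Builder("my_signing_key", KeyProperties.PURPOSE_SIGN)
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setIsStrongBoxBacked(strongBox)   // API 28: keep the key in tamper-resistant hardware
        .build()

    val generator = KeyPairGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore")
    generator.initialize(spec)
    return generator.generateKeyPair()
}

fun generateKeyPreferStrongBox(): KeyPair =
    try {
        generateSigningKey(strongBox = true)
    } catch (e: StrongBoxUnavailableException) {
        generateSigningKey(strongBox = false)   // no StrongBox on this device
    }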

DNS over TLS

Android 9 adds built-in support for DNS over TLS, automatically upgrading DNS queries to TLS if a network's DNS server supports it. Users can manage DNS over TLS behavior in a new Private DNS Mode in Network & internet settings. Apps that perform their own DNS queries can use a new API, LinkProperties.isPrivateDnsActive(), to check the DNS mode. More in this post.
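
For instance, a small Kotlin sketch of that check on the active network:

import android.content.Context
import android.net.ConnectivityManager

// Returns true when the active network is resolving names over DNS-over-TLS.
fun isPrivateDnsActive(context: Context): Boolean {
    val cm = context.getSystemService(ConnectivityManager::class.java) ?: return false
    val linkProperties = cm.getLinkProperties(cm.activeNetwork) ?: return false
    return linkProperties.isPrivateDnsActive   // API 28
}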

HTTPS by default

As part of a larger effort to move all network traffic away from cleartext (unencrypted HTTP) to websites secured with TLS (HTTPS), we're changing the defaults for Network Security Configuration to block all cleartext traffic. You'll now need to make connections over TLS, unless you explicitly opt-in to cleartext for specific domains. See the details here.

Compiler-based security mitigations

In Android 9 we've expanded our use of compiler-level mitigations to harden the platform through run-time detection of dangerous behavior. Control Flow Integrity (CFI) techniques help to prevent code-reuse attacks and arbitrary code execution. In Android 9 we've greatly expanded CFI usage within the media framework and other security-critical components, such as NFC and Bluetooth. We've also introduced CFI kernel support into the Android common kernel when building with LLVM.

We've also expanded our use of Integer overflow sanitizers to mitigate memory-corruption and information-disclosure vulnerabilities. We've prioritized sanitizers in libraries with past vulnerabilities or where complex untrusted input is processed, such as libui, libnl, libmediaplayerservice and others. See this post for details.

Privacy for users

Android 9 safeguards privacy in a number of new ways. The system now restricts access to mic, camera, and all SensorManager sensors from apps that are idle. While your app's UID is idle, the mic reports empty audio and sensors stop reporting events. Cameras used by your app are disconnected and will generate an error if the app tries to use them. In most cases, these restrictions should not introduce new issues for existing apps, but we recommend removing these requests from your apps.

Android 9 also gives the user control over access to the platform's build.serial identifier by putting it behind the READ_PHONE_STATE permission. To access the build.serial identifier, you should use the Build.getSerial() method.

Read more about all of the privacy changes here.

New experiences in camera, audio, and graphics

Multi-camera API and other camera updates

With Android 9 you can now open streams from two or more physical cameras simultaneously on devices that support the multi-camera API. On devices with either dual-front or dual-back cameras, you can create innovative features not possible with just a single camera, such as seamless zoom, bokeh, and stereo vision. The API also lets you call a logical or fused camera stream that automatically switches between two or more cameras.

Other improvements in camera include new Session parameters that help to reduce delays during initial capture, and Surface sharing that lets camera clients handle various use-cases without the need to stop and start camera streaming. We've also added APIs for display-based flash support and access to OIS timestamps for app-level image stabilization and special effects.

HDR VP9 Video and HEIF image compression

Android 9 adds built-in support for HDR VP9 Profile 2, so you can now deliver HDR-enabled movies to your users on HDR-capable devices.

We're excited to add HEIF (heic) image encoding to the platform. HEIF is a popular format for photos that improves compression to save on storage and network data. With platform support on Android 9 devices, it's easy to send and utilize HEIF images from your backend server. Once you've made sure that your app is compatible with this data format for sharing and display, give HEIF a try as an image storage format in your app. You can do a jpeg-to-heic conversion using ImageDecoder or BitmapFactory to obtain a bitmap from jpeg, and you can use HeifWriter in the AndroidX library to write HEIF still images from YUV byte buffer, Surface, or Bitmap.
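
As a rough sketch (assuming the androidx.heifwriter artifact is on your classpath; the paths, quality, and timeout value are placeholders -- check the HeifWriter docs for the exact timeout semantics), a JPEG-to-HEIC conversion could look like this:

import android.graphics.Bitmap
import android.graphics.ImageDecoder
import androidx.heifwriter.HeifWriter
import java.io.File

// Decode a JPEG into a software Bitmap, then re-encode it as a HEIF still image.
fun jpegToHeif(jpegFile: File, heifFile: File) {
    val bitmap: Bitmap = ImageDecoder.decodeBitmap(ImageDecoder.createSource(jpegFile)) { decoder, _, _ ->
        decoder.allocator = ImageDecoder.ALLOCATOR_SOFTWARE   // HeifWriter needs CPU-readable pixels
    }

    val writer = HeifWriter.Builder(
        heifFile.absolutePath, bitmap.width, bitmap.height, HeifWriter.INPUT_MODE_BITMAP)
        .setQuality(90)       // 0-100, higher is better quality
        .build()

    writer.start()
    writer.addBitmap(bitmap)
    writer.stop(0)            // wait for encoding to finish (see docs for the timeout parameter)
    writer.close()
}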

Enhanced audio with Dynamics Processing

The Dynamics Processing API lets you use a new audio effect to isolate specific frequencies and reduce loud sounds or boost quiet ones to enhance the acoustic quality of your app. For example, you can improve the sound of someone who speaks quietly in a loud, distant, or otherwise acoustically challenging environment. The API gives you access to a multi-stage, multi-band dynamics processing effect that includes a pre-equalizer, a multi-band compressor, a post-equalizer, and a linked limiter.

ImageDecoder for bitmaps and drawables

An ImageDecoder API gives you an easier way to decode images to bitmaps or drawables. You can create a bitmap or drawable from a byte buffer, file, or URI. The API offers several advantages over BitmapFactory, including support for exact scaling, single-step decoding to hardware memory, support for post-processing in decode, and decoding of animated images. You can read more here.
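
A brief Kotlin sketch (the URIs come from wherever your app loads images) showing single-step decoding with scaling at decode time, plus animated-image handling:

import android.content.Context
import android.graphics.Bitmap
import android.graphics.ImageDecoder
import android.graphics.drawable.AnimatedImageDrawable
import android.graphics.drawable.Drawable
import android.net.Uri

fun decodeThumbnail(context: Context, uri: Uri): Bitmap {
    val source = ImageDecoder.createSource(context.contentResolver, uri)
    // Decode in a single step, scaling to a quarter-size thumbnail during decode.
    return ImageDecoder.decodeBitmap(source) { decoder, info, _ ->
        decoder.setTargetSize(info.size.width / 4, info.size.height / 4)
    }
}

fun decodeAndStartAnimation(context: Context, uri: Uri): Drawable {
    val source = ImageDecoder.createSource(context.contentResolver, uri)
    val drawable = ImageDecoder.decodeDrawable(source)
    if (drawable is AnimatedImageDrawable) drawable.start()   // animated GIF or WebP
    return drawable
}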

Connectivity and location

Wi-Fi RTT for indoor positioning

Android 9 lets you build indoor positioning features into your apps through platform support for the IEEE 802.11mc Wi-Fi protocol -- also known as Wi-Fi Round-Trip-Time (RTT). On Android 9 devices with hardware support, location permission, and location enabled, your apps can use RTT APIs to measure the distance to nearby Wi-Fi Access Points (APs). The device doesn't need to connect to the APs to use RTT, and to maintain privacy, only the phone is able to determine the distance, not the APs.

Knowing the distance to 3 or more APs, you can calculate the device position with an accuracy of 1 to 2 meters. With this accuracy you can support use-cases like in-building navigation; fine-grained location-based services such as disambiguated voice control (e.g. 'Turn on this light'); and location-based information (e.g. 'Are there special offers for this product?').
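
A hedged Kotlin sketch of a ranging request (assumes location permission is already granted and that scanResults contains RTT-capable access points from a recent Wi-Fi scan):

import android.content.Context
import android.net.wifi.ScanResult
import android.net.wifi.rtt.RangingRequest
import android.net.wifi.rtt.RangingResult
import android.net.wifi.rtt.RangingResultCallback
import android.net.wifi.rtt.WifiRttManager

fun measureDistances(context: Context, scanResults: List<ScanResult>) {
    val rttManager = context.getSystemService(WifiRttManager::class.java) ?: return

    val request = RangingRequest.Builder()
        .addAccessPoints(scanResults)        // RTT-capable APs from a recent scan
        .build()

    rttManager.startRanging(request, context.mainExecutor,
        object : RangingResultCallback() {
            override fun onRangingResults(results: List<RangingResult>) {
                results.filter { it.status == RangingResult.STATUS_SUCCESS }
                    .forEach { result ->
                        println("~${result.distanceMm / 1000.0} m to ${result.macAddress}")
                    }
            }

            override fun onRangingFailure(code: Int) {
                // Ranging could not be performed (e.g., RTT temporarily unavailable).
            }
        })
}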

Data cost sensitivity in JobScheduler

JobScheduler is Android's central service to help you manage scheduled tasks or work across Doze, App Standby, and Background Limits. In Android 9, JobScheduler handles network-related jobs better for the user, coordinating with network status signals provided separately by carriers. Jobs can now declare their estimated data size, signal prefetching, and specify detailed network requirements—carriers can report networks as being congested or unmetered. JobScheduler then manages work according to the network status. For example, when a network is congested, JobScheduler might defer large network requests. When unmetered, it can run prefetch jobs to improve the user experience, such as prefetching headlines.
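
To illustrate, a minimal Kotlin sketch of scheduling a prefetch job that declares its estimated transfer size and an unmetered-network requirement (the job id, byte estimates, and service class are placeholders):

import android.app.job.JobInfo
import android.app.job.JobParameters
import android.app.job.JobScheduler
import android.app.job.JobService
import android.content.ComponentName
import android.content.Context

const val PREFETCH_JOB_ID = 17   // arbitrary but stable job id

// Hypothetical JobService; it must also be declared in the manifest.
class HeadlinePrefetchService : JobService() {
    override fun onStartJob(params: JobParameters?): Boolean = false  // no background work in this sketch
    override fun onStopJob(params: JobParameters?): Boolean = false
}

fun schedulePrefetch(context: Context) {
    val job = JobInfo.Builder(PREFETCH_JOB_ID, ComponentName(context, HeadlinePrefetchService::class.java))
        .setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED)  // only run on unmetered networks
        .setEstimatedNetworkBytes(1_000_000, 10_000)             // ~1 MB down, ~10 KB up (API 28)
        .setPrefetch(true)                                       // mark this as prefetch work (API 28)
        .build()

    context.getSystemService(JobScheduler::class.java)?.schedule(job)
}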

Open Mobile API for NFC payments and secure transactions

Android 9 adds an implementation of the GlobalPlatform Open Mobile API to Android. On supported devices, apps can use the OMAPI API to access secure elements (SE) to enable smart-card payments and other secure services. A hardware abstraction layer (HAL) provides the underlying API for enumerating the variety of Secure Elements (eSE, UICC, and others) available.

Performance for apps

ART performance

Android 9 brings performance and efficiency improvements to all apps through the ART runtime. We've expanded ART's use of execution profiles to optimize apps and reduce the in-memory footprint of compiled app code. ART now uses profile information for on-device rewriting of DEX files, with reductions up to 11% across a range of popular apps. We expect these to correlate closely with reductions in system DEX memory usage and faster startup times for your apps.

Optimized for Kotlin

Kotlin is a first-class language on Android, and if you haven't tried it yet, you should! We've made an enduring commitment to Kotlin in Android and continue to expand support including optimizing the performance of Kotlin code. In Android 9, you'll see the first results of this work--we've improved several compiler optimizations, especially those that target loops, to extract better performance. We're also continuing to work in partnership with JetBrains to optimize Kotlin's generated code. You can get all of the latest Kotlin performance improvements just by keeping Android Studio's Kotlin plugin up-to-date.

Today, we are also releasing an update to the Android 9 - API 28 SDK (rev. 6), which contains nullability annotations in some of the most frequently used APIs. We'll provide more details about this in an upcoming post.

Modern Android

With Android 9 we are modernizing the foundations of Android and the apps that run on it, as part of our deep, sustained investments in security, performance, and stability.

As we announced last year, Google Play will require all app updates to target Android Oreo (targetSdkVersion 26 or higher) by November 2018. In line with that, if your app targets a platform earlier than Android 4.2 (API level 17), users installing it will see a warning dialog after that day. Here's a checklist of resources for help and support as you migrate -- we're looking forward to seeing your apps getting the most from modern Android.

Get your apps ready for Android 9!

With Android 9 coming to Pixel users starting today, and to other devices in the months ahead, it's important to test your app for compatibility as soon as possible. Just install your current app from Google Play on a device or emulator running Android 9. As you work through the flows, make sure your app runs and looks great, and that it handles the Android 9 behavior changes properly.

Also watch for uses of non-SDK interfaces in your app. Android 9 restricts access to selected non-SDK interfaces, so you should reduce your reliance on them. See our recent post for details.

After you've made any necessary updates, we recommend publishing to Google Play right away, without changing the app's platform targeting. This lets you ensure a great experience for Android 9 users while you work on enhancing your app with Android 9 APIs and targeting.

Enhance your app with Android 9 features and APIs

When you're ready, dive in and build with the new features and APIs in Android 9.

To get started, just download the official API 28 SDK and the latest tools and emulator images into Android Studio 3.1, or use the latest version of Android Studio 3.2. Then update your project's compileSdkVersion and targetSdkVersion to API 28. When you change your targeting, make sure your app supports all of the applicable behavior changes.

As soon as you're ready, publish your APK updates to Google Play. A common strategy is to use Google Play's beta testing feature to get early feedback from a small group of users and then do a staged rollout to production.

Visit the Android 9 site for details and developer documentation. Also check out this video and the Google I/O Android Playlist for more on what's new in Android 9 for developers.

Coming to a device near you

Starting today, an over-the-air update to Android 9 will begin rolling out to Pixel phones. And devices that participated in the Beta program from Sony Mobile, Xiaomi, HMD Global, Oppo, Vivo, OnePlus, and Essential, as well as all qualifying Android One devices, will receive this update by the end of this fall! We are also working with a number of other partners to launch or upgrade devices to Android 9 this year.

As always, the system images for Pixel devices are available here for manual flash and download. If you're looking for the Android 9 source, you'll find it here in the Android Open Source Project repository under the Android 9 branches.

What's next?

Now that we've reached the official release, we're bringing the Developer Preview to a close. We'll soon be closing the Developer Preview issue tracker to new issues, so if you have feedback, feel free to file a new issue against Android 9 in the AOSP issue tracker.

Thanks again to the many developers and early adopters who participated in the Android 9 Developer Preview and public beta. Your contributions have been critical to making the Android 9 platform a great one for developers and consumers.

Android 9 Pie: Powered by AI for a smarter, simpler experience that adapts to you

$
0
0
https://lh6.googleusercontent.com/bKLmlK_9gXQcA9G0PhSyp2lCZ282r8irBHlyVflIjPCSl6D0wX0aaQGfTHYUoAqxzkFDvbZcveXcCjBIs7it2wEcjbPglWoDAMJH88btu7O_isbZM5VUdZPLfpDQFSz_Fy0EOl7N
The latest and greatest release of Android is officially here! Android 9 Pie comes with features that make your phone smarter and simpler to use, plus help you with your digital wellbeing.




The latest release of Android is here! And it comes with a heaping helping of artificial intelligence baked in to make your phone smarter, simpler and more tailored to you. Today we’re officially introducing Android 9 Pie.


We’ve built Android 9 to learn from you—and work better for you—the more you use it. From predicting your next task so you can jump right into the action you want to take, to prioritizing battery power for the apps you use most, to helping you disconnect from your phone at the end of the day, Android 9 adapts to your life and the ways you like to use your phone.


Tailored to you


Android 9 aims to make your phone even smarter by learning from you and adapting to your usage patterns. That’s why Android 9 comes with features like Adaptive Battery, which learns the apps you use most and prioritizes battery for them, and Adaptive Brightness, which learns how you like to set the brightness in different settings, and does it for you.
 
Android 9 also helps you get things done faster with App Actions, which predict what you'll want to do next based on your context and display that action right on your phone. Say it's Tuesday morning and you're preparing for your commute: you'll see suggested actions like navigating to work on Google Maps or resuming an audiobook with Google Play Books. And when you put in headphones after work, you may see options to call your mom or start your favorite Spotify playlist.
Later this fall, we'll also roll out Slices (pie...slices...get it?!), which show relevant information from your favorite apps when you need it. If you start typing "Lyft" into Google Search, you'll see a "slice" of the Lyft app, showing prices for your ride home and the ETA for a driver so you can take action more quickly and easily.




Android 9 — now easy as pie
Making your phone smarter and more adaptive is important, but we also want Android to be easier to use and more approachable. In Android 9, we’ve introduced a new system navigation featuring a single home button.


This is especially helpful as phones grow taller and it’s more difficult to get things done on your phone with one hand. With a single, clean home button, you can swipe up to see a newly designed Overview, the spot where at a glance you have full-screen previews of your recently used apps.


You can swipe up from anywhere to see full-screen previews of recently used apps and simply tap to jump back into one of them. If you find yourself constantly switching between apps on your Pixel, we’ve got good news for you: Smart Text Selection (which recognizes the meaning of the text you’re selecting and suggests relevant actions) now works on the Overview of your recent apps, making it easier to perform the action you want.

Changing how you navigate your phone is a big deal, but small changes can make a big difference too. Android 9 also brings a redesigned Quick Settings, a better way to take and edit screenshots (say goodbye to the Vulcan grip that was required before), simplified volume controls, an easier way to manage notifications and more. You'll notice small changes like these across the platform, to help make the things you do all the time easier than ever.


Find the balance that’s right for your life
While much of the time we spend on our phones is useful, many of us wish we could disconnect more easily and free up time for other things. In fact, over 70 percent of people we talked to in our research said they want more help with this. So we’ve been working to add key capabilities right into Android to help people achieve the balance with technology they’re looking for.  


At Google I/O in May, we previewed some of these digital wellbeing features for Android, including a new Dashboard that helps you understand how you’re spending time on your device; an App Timer that lets you set time limits on apps and grays out the icon on your home screen when the time is up; the new Do Not Disturb, which silences all the visual interruptions that pop up on your screen; and Wind Down, which switches on Night Light and Do Not Disturb and fades the screen to grayscale before bedtime.

Digital Wellbeing will officially launch on Pixel phones this fall, with Android One and other devices coming later this year. But these features are available in beta now for Pixel phones running Android 9. To try them out:

  1. Make sure you’re running Android 9 Pie on your device. (Learn how to check which version of Android you have.)
  2. Sign up for the beta with the email address you use with Google Play.
  3. Accept your invitation to become a beta tester by clicking the link in your welcome email.

Once you’ve accepted your invitation, Digital Wellbeing will appear in your phone’s Settings app. It may take up to 24 hours for Digital Wellbeing to appear on your device.


Security and privacy baked in


Improving security is always a priority in each of our platform releases. In addition to continuously hardening the platform and improving the security model for biometrics, Android 9 enables industry-leading hardware security capabilities that protect sensitive data, like credit card information, using a secure, dedicated chip. Android 9 also brings important privacy improvements, such as TLS by default and DNS over TLS, to help protect all web communications and keep them private.


Coming to a device near you
Starting today, an over-the-air update to Android 9 will begin rolling out to Pixel phones. And devices that participated in the Beta program from Sony Mobile, Xiaomi, HMD Global, Oppo, Vivo, OnePlus, and Essential, as well as all qualifying Android One devices, will receive this update by the end of this fall! We are also working with a number of other partners to launch or upgrade devices to Android 9 this year.


Learn more about Android 9 Pie at android.com/9.

Posted by: Sameer Samat, VP of Product Management, Android & Google Play

Android Pie SDK is now more Kotlin-friendly


Posted by James Lau, Product Manager (@jmslau)

When using the Java programming language, one of the most common pitfalls is trying to access a member of a null reference, causing a NullPointerException to be thrown. Kotlin offers protection against this by baking nullable and non-nullable types into the type system. This helps eliminate NullPointerExceptions from your code and improve your app's overall quality. When Kotlin code is calling into APIs written in the Java programming language, it relies on nullability annotations in those APIs to determine the nullability of each parameter and the return type. Unannotated parameters and return types are treated as platform types, which weakens the null-safety guarantee of Kotlin.

As part of yesterday's Android 9 announcement, we have also released a new Android SDK that contains nullability annotations for some of the most frequently used APIs. This will preserve the null-safety guarantee when your Kotlin code is calling into any annotated APIs in the SDK. Even if you are using the Java programming language, you can still benefit from these annotations by using Android Studio to catch nullability contract violations.

Not a breaking change

Normally, nullability contract violations in Kotlin result in compilation errors. But to ensure the newly annotated APIs are compatible with your existing code, we are using an internal mechanism provided by the Kotlin compiler team to mark the APIs as recently annotated. Recently annotated APIs will result only in warnings instead of errors from the Kotlin compiler. You will need to use Kotlin 1.2.60 or later.

Our plan is to have newly added nullability annotations produce warnings only, and increase the severity level to errors starting in the following year's Android SDK. The goal is to provide you with sufficient time to update your code.

How to use the "Kotlin-friendly" SDK

To get started, go to Tools > SDK Manager in Android Studio. Select Android SDK on the left menu, and make sure the SDK Platforms tab is open.

Use SDK Manager in Android Studio to install SDK for API Level 28 Revision 6

Check Android 8.+ (P) and click OK. This will install the Android SDK Platform 28 revision 6 if it is not already installed. After that, set your project's compile SDK version to API 28 to start using the new Android Pie SDK with nullability annotations.

Use the Project Structure Dialog to change your project's Compile Sdk Version to API 28

You may also need to update your Kotlin plugin in Android Studio if it's not already up-to-date. Make sure your Kotlin plugin version is 1.2.60 or later by going to Tools > Kotlin > Configure Kotlin Plugin Updates.

Once it's set up, your builds will start showing warnings if you have any code that violates nullability contracts in the Android SDK. An example of such a warning is shown below.

Sample warning from the Kotlin compiler when code violates a recently added nullability contract in the Android SDK.

You will also start seeing warnings in Android Studio's code editor if you call an Android API with the incorrect nullability. An example is shown below.

Android Studio warning about passing a null reference to a parameter annotated as a recently non-null type in the android.graphics.Path API.
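
As an illustrative Kotlin sketch (assuming the single-argument android.graphics.Path.addPath parameter is among the recently annotated non-null parameters; the function itself is hypothetical), code like this now produces a compiler warning rather than compiling silently:

import android.graphics.Path

fun buildPath(): Path {
    val path = Path()
    // Passing null into a parameter annotated as recently non-null compiles with a
    // warning on Kotlin 1.2.60+ against the API 28 SDK, and is planned to become an
    // error in a future SDK release.
    path.addPath(null)
    return path
}

Passing a real Path (or making the argument non-null) clears the warning.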

Leveraging nullability annotations from the Java programming language

You can benefit from the new nullability annotations even if your code is in the Java programming language. By default, Android Studio will highlight any nullability contract violations with a warning, like the one below:

Android Studio showing a warning about nullability contract violation in code written in the Java programming language

To ensure that you have this inspection enabled, you can go to the IDE's settings page and search for "Constant conditions & exceptions" inspection and make sure that item is checked.

Use the Inspections page under Settings to ensure the Constant conditions & exceptions code inspection is enabled.

If you are using the Java programming language, nullability contract violations will not produce any compiler warning or error. Only the in-IDE code inspections are available to flag these issues.

You can also run code inspections across your entire project and see the aggregated results. Click on Analyze > Inspect Code… to start.

What's Next

The Android SDK API surface is very large, and we have only annotated a small percentage of the APIs so far - there is still lots of work remaining. Over the next several Android SDK releases, we will continue to add nullability annotations to the existing Android APIs and make sure new APIs are annotated.

With the "Kotlin-friendly" Android SDK, the nullability annotations in AndroidX (part of the Jetpack family), and Android KTX, we are continuing to improve the Android APIs for developers using Kotlin. If you have not yet tried Kotlin, we encourage you to try it. Not only can Kotlin make your code more concise, it can also improve the stability of your apps.

Happy Kotlin-ing!

Introducing the new lead for Android Open Source Project

This week began with the announcement of Android 9 Pie and, as usual, the subsequent upstreaming of code to the Android Open Source Project (AOSP). But the release of Android 9 isn’t the only important Android news!

Tucked away in the announcement to the Android Building mailing list was this note:

“I also wanted to take a moment to introduce myself as the new Tech Lead / Manager for AOSP. My name is Jeff Bailey, and I’ve been involved in the Open Source community for more than two decades. Since I joined the Android team a few months ago, I’ve been learning how we do things and getting an understanding of how we could work better with the community. I’d love to hear from you: @JeffBaileyAOSP on Twitter or jeffbailey+aosp@google.com. Be well!”

As Jeff notes in his introduction, he has a history in free and open source software (FOSS). He’s been an avid user, contributor, and maintainer since before the Open Source Definition was inked!

Jeff co-founded Savannah, where GNU software is developed and distributed, spent 15 years working on Debian, and has been an Ubuntu core developer. Further, he spent some time on the Google Open Source team and was involved in open sourcing Android back in 2008.

Open source projects, even those which originate inside of companies, are powered by the community of users and contributors that surround them. And those communities thrive when they have stewards who are steeped in the traditions of free and open source software. We’re excited for AOSP as Jeff takes the reins. He brings both technical and cultural skills to the table, and he’s been involved with the project since the beginning!

Suffice it to say, AOSP is in good hands. We welcome Jeff to his new role and, as he said in his introduction, he’d love to hear from the community: you can reach Jeff on Twitter and via email.

By Josh Simmons, Google Open Source

Meet the first Indie Games Accelerator class


Posted by Vineet Tanwar, Business Development Manager, Google Play

In June, we announced the Indie Games Accelerator, a new four month program to help indie game startups from India, Pakistan and Southeast Asia supercharge their growth on Android. We have been truly impressed by the overwhelming responses we have received, and the creativity that indie game developers from these regions have to offer.

We had a great time going through the applications and playing the games which were submitted for review. Now, it's finally time to announce the inaugural class of startups selected for the program who we will mentor and coach over the next few months. Here they are:

Congratulations to the selected participants and a huge thanks to everyone that applied! Find out more about the program or express your interest in joining the next class of the Indie Games Accelerator.


Looking forward with Google Play


Posted by Purnima Kochikar, Director, Google Play, Apps & Games

On Monday we released Android 9 Pie. As we continue to push the Android platform forward, we're always looking to provide new ways to distribute your apps efficiently, help people discover and engage with your work, and improve the overall security of our ecosystem. Google Play has had a busy year so far with some big milestones around helping you reach more users, including:

  • Shrinking download size: The Android App Bundle & Dynamic Delivery have helped reduce app sizes by up to 65%, leading to increased downloads and fewer uninstalls.
  • Helping improve quality: New tools in the Play Console have helped you reduce crash rates by up to 70%.
  • Improving discovery: Improvements to the discovery experience have increased Google Play Store visits by 30% over the last 12 months.
  • Keeping users safe: Google Play Protect scans more than 50 billion apps a day and Android API level 26 adoption requirements improve app security and performance.

Google Play is dedicated to helping you build and grow quality app businesses, reach the more than 2 billion Android devices globally and provide your users with better experiences. Here are some of the important areas we're prioritizing this year:

Innovative Distribution

We've added more testing tools to the popular Play Console to help developers de-risk app launches with internal and external test tracks and staged rollouts to get valuable early feedback. This year we've expanded the Start on Android program globally, which provides developers new to Android additional guidance to optimize their apps before launch. Google Play Instant remains a huge bet to transform app discovery and improve conversions by letting users engage without the friction of installing. We're seeing great results from early adopters and are working on new places to surface instant experiences, including ads, and making them easier to build throughout the year.

Improving App Quality

Google Play plays an important role in helping developers understand and fix quality and performance issues. At I/O, we showcased how we expanded Android vitals reporting beyond battery, stability, and rendering to include app start time and permission denials, enabling developers to cut "application not responding" (ANR) errors by up to 95%. We also expanded the functionality of automated device testing with the pre-launch report to enable games testing. Recently, we increased the importance of app quality in our search and discovery recommendations, which has resulted in higher engagement and satisfaction with downloaded games.

Richer Discovery

Over the last year we've rolled out more editorial content and improved our machine learning to deliver personalized recommendations for apps and games that engage users. Since most game downloads come from browsing (as opposed to searching or deep linking into) the store, we've put particular focus on games discovery, with a new games home page, special sections for premium and new games, immersive video trailers and screenshots, and the ability to try games instantly. We've also introduced new programs to help drive app downloads through richer discovery. For example, since launching our app pre-registration program in 2016, we've seen nearly 250 million app pre-registrations. Going forward, we'll be expanding on these programs and others like LiveOps cards to help developers engage more deeply with their audience.

Expanding Commerce Platform

Google Play now collects payments in 150 markets via credit card, direct carrier billing (DCB), PayPal, and gift cards. Direct carrier billing is now enabled across 167 carriers in 64 markets. In 2018, we have focused on expanding our footprint in Africa and Latin America with launches in Ghana, Kenya, Tanzania, Nigeria, Peru & Colombia. And users can now buy Google Play credit via gift cards or other means in more than 800,000 retail locations around the world. This year, we also launched seller support in 18 new markets, bringing the total markets with seller support to 98. Our subscription offering continues to improve with ML-powered fraud detection and even more control for subscribers and developers. Google Play's risk modeling automatically helps detect fraudulent transactions, and purchase APIs help you better analyze your refund data to identify suspicious activity.

Maintaining a Safe & Secure Ecosystem

Google Play Protect and our other systems scan and analyze more than 50 billion apps a day to keep our ecosystem safe for users and developers. In fact, people who only download apps from Google Play are nine times less likely to download a potentially harmful app than those who download from other sources. We've made significant improvements in our ability to detect abuse—such as impersonation, inappropriate content, fraud, or malware—through new machine learning models and techniques. The result is that 99% of apps with abusive content are identified and rejected before anyone can install them. We're also continuing to run the Google Play Security Rewards Program through a collaboration with HackerOne to discover other vulnerabilities.

We are continually inspired by what developers build—check out #IMakeApps for incredible examples—and want every developer to have the tools needed to succeed. We can't wait to see what you do next!

The Machine Learning Behind Android Smart Linkify



Earlier this week we launched Android 9 Pie, the latest release of Android that uses machine learning to make your phone simpler to use. One of the features in Android 9 is Smart Linkify, a new API that adds clickable links when certain types of entities are detected in text. This is useful when, for example, you receive an address from a friend in a messaging app and want to look it up on a map. With a Smart Linkify-annotated text, it’s a lot easier!
Smart Linkify is a new version of the existing Android Linkify API. It is powered by a small feed-forward neural network (500kB per language) with low latency (less than 20ms on Google Pixel phones) and small inference code (250kB), and uses essentially the same machine learning technology that powers Smart Text Selection (released as part of Android Oreo) to now also create links.

Smart Linkify is available as an open-source TextClassifier API in Android (as the generateLinks method). The models were trained using TensorFlow and exported to a custom inference library backed by TensorFlow Lite and FlatBuffers. The C++ inference library for the models is available as part of the Android Open-Source framework here, and runs on each text selection and Smart Linkify API call.
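
A minimal Kotlin sketch of calling this API (the function name and log tag are illustrative; generateLinks is a blocking call, so it should run off the main thread):

import android.content.Context
import android.util.Log
import android.view.textclassifier.TextClassificationManager
import android.view.textclassifier.TextLinks

fun logSmartLinks(context: Context, text: String) {
    val classifier = context
        .getSystemService(TextClassificationManager::class.java)
        .textClassifier

    val request = TextLinks.Request.Builder(text).build()
    val links = classifier.generateLinks(request)  // blocking call

    for (link in links.links) {
        val span = text.substring(link.start, link.end)
        Log.d("SmartLinkify", "${link.getEntity(0)}: $span")  // highest-scoring entity type
    }
}

In a real app you would typically apply the returned TextLinks to a Spannable so the detected spans become clickable, rather than just logging them.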

Finding Entities
Looking for phone numbers and postal addresses in text is a difficult problem. Not only are there many variations in how people write them, but it's also often ambiguous what type of entity is being represented (e.g. "Confirmation number: 857-555-3556" is not a phone number even though it takes a similar form to one). As a solution, we designed an inference algorithm with two small feedforward neural networks at its heart. This algorithm is general enough to perform all kinds of entity chunking beyond just addresses and phone numbers.

Overall, the system architecture is as follows: A given input text is first split into words (based on space separation), then all possible word subsequences of a certain maximum length (15 words in our case) are generated, and for each candidate the scoring neural net assigns a value (between 0 and 1) based on whether it represents a valid entity:
For the given text string, the first network assigns low scores to non-entities and a high score for the candidate that correctly selects the whole phone number.
Next, the generated entities that overlap are removed, favoring the ones with the higher score over the conflicting ones with a lower score. Now, we have a set of entities, but still don’t know their types. So now the second neural network is used to classify the type of the entity, as either a phone number, address or in some cases, a non-entity.

In our example, "And call 857 555 3556 tomorrow.", the only non-conflicting entities are "857 555 3556" (classified as a phone number) and "And" (classified as a non-entity).

With these non-conflicting entities and their types, we can easily underline them in the displayed text on the screen, and run the right app when they are clicked.
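
A rough Kotlin sketch of this generate-score-prune step (the neural scorer is abstracted away as a score function; this illustrates the algorithm as described, not the production implementation):

// A candidate span of words [start, end) with its score from the first network.
data class Candidate(val start: Int, val end: Int, val score: Float)

fun generateAndPrune(
    words: List<String>,
    score: (List<String>) -> Float,
    maxLen: Int = 15
): List<Candidate> {
    // 1. Enumerate every word subsequence of up to maxLen words and score it.
    val candidates = mutableListOf<Candidate>()
    for (start in words.indices) {
        for (end in (start + 1)..minOf(start + maxLen, words.size)) {
            candidates += Candidate(start, end, score(words.subList(start, end)))
        }
    }
    // 2. Keep higher-scoring spans and drop any candidate that overlaps one already kept.
    val kept = mutableListOf<Candidate>()
    for (c in candidates.sortedByDescending { it.score }) {
        if (kept.none { it.start < c.end && c.start < it.end }) kept += c
    }
    return kept
}

The surviving spans are then handed to the second network, which labels each one as a phone number, an address, or a non-entity.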

Textual Features
So far, we’ve given a general description of the way Smart Linkify locates and classifies entities in a string of text. Here, we go into more detail on how the text is processed and fed to the network.

The task of the networks, given an entity candidate in the input text, is to determine whether the entity is valid, and then to classify it. To do this, the networks need to know the context surrounding the entity (in addition to the text string of the entity itself). In machine learning this is done by representing these parts as separate features. Effectively, the input text is split into several parts that are fed to the network separately:
Given a candidate entity span, we extract the following features:

  • Left context: five words before the entity
  • Entity start: first three words of the entity
  • Entity end: last three words of the entity (these can duplicate the previous feature if the spans overlap, or are padded if the entity has fewer words)
  • Right context: five words after the entity
  • Entity content: bag of words inside the entity
  • Entity length: size of the entity in number of words

These features are then concatenated together and fed as an input to the neural network.
The feature extraction operates with words, and we use character n-grams and a capitalization feature to represent the individual words as real vectors suitable as an input of the neural network:
  • Character N-grams. Instead of using the standard word embedding technique for representing words, which keeps a separate vector for each word in the model and thus would be infeasible for mobile devices because of their large storage size, we use the hashed charactergram embedding. This technique represents the word as a set of all character subsequences of a certain length. We use lengths 1 to 5. These strings are additionally hashed and mapped to a fixed number of buckets (see here for more details on the technique). As a result, the final model only stores vectors for each of the hash buckets, not each word/character subsequence, and can be kept small. The embedding matrix for the hashed charactergrams that we use has 20,000 buckets and 12 dimensions. (A rough sketch of the hashing step follows this list.)
  • A binary feature that indicates whether the word starts with a capital letter. This is important for the network to know because the capitalization in postal addresses is quite distinct, and helps the networks to discriminate.
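
A rough Kotlin sketch of the hashed charactergram step described in the first bullet (the hash function here is a stand-in; only the n-gram lengths and bucket count follow the description above):

// Map a word to the set of embedding buckets selected by its character 1- to 5-grams.
fun charNgramBuckets(word: String, maxN: Int = 5, numBuckets: Int = 20_000): Set<Int> {
    val buckets = mutableSetOf<Int>()
    for (n in 1..maxN) {
        for (i in 0..word.length - n) {
            val ngram = word.substring(i, i + n)
            buckets += Math.floorMod(ngram.hashCode(), numBuckets)
        }
    }
    return buckets
}

// Each bucket indexes a 12-dimensional row of the embedding matrix; those rows are combined,
// together with the capitalization bit, to form the word's input vector.
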
A Training Dataset
There is no obvious dataset for this task on which we could readily train the networks, so we came up with a training algorithm that generates synthetic examples out of realistic pieces. Concretely, we gathered lists of addresses, phone numbers and named entities (like product, place and business names) and other random words from the Web (using Schema.org annotations), and used them to synthesize the training data for the neural networks. We take the entities as they are and generate random textual contexts around them (from the list of random words on the Web). Additionally, we add phrases like "Confirmation number:" or "ID:" to the negative training data for phone numbers, to teach the network to suppress phone number matches in these contexts.

Making it Work
There are a number of additional techniques that we had to use for training the network and making a practical mobile deployment:
  • Quantizing the embedding matrix to 8 bits. We found that we could reduce the size of the model almost 4x without compromising the performance, by quantizing the embedding matrix values to 8-bit integers. (A toy sketch of this idea follows this list.)
  • Sharing embedding matrices between the selection and classification networks. This brings almost no loss and makes the model 2x smaller.
  • Varying the size of the context before/after the entities. On mobile screens text is often short, with not enough context, so the network needs to be exposed to this during training as well.
  • Creating artificial negative examples out of the positive ones for the classification network. For example, from the positive example "call me 857 555-3556 today" (with the span "857 555-3556" labeled "phone"), we generate a negative example whose span also swallows the neighboring word, such as "857 555-3556 today", labeled "other". This teaches the classification network to be more precise about the entity span. Without doing this, the network would be merely a detector for whether there is a phone number somewhere in the input, regardless of the span.
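
A toy Kotlin sketch of the 8-bit quantization idea from the first bullet above (the production model uses its own scheme inside the inference library; this only illustrates why storage drops roughly 4x, since each 32-bit float is replaced by one byte plus a shared scale):

import kotlin.math.abs
import kotlin.math.roundToInt

// Symmetric per-matrix quantization: floats in [-maxAbs, maxAbs] map to bytes in [-127, 127].
fun quantize(matrix: FloatArray): Pair<ByteArray, Float> {
    var maxAbs = 0f
    for (v in matrix) maxAbs = maxOf(maxAbs, abs(v))
    val scale = if (maxAbs == 0f) 1f else maxAbs / 127f
    val quantized = ByteArray(matrix.size) { i ->
        (matrix[i] / scale).roundToInt().coerceIn(-127, 127).toByte()
    }
    return quantized to scale
}

// At inference time a value is recovered approximately as quantized[i] * scale.
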
Internationalization is Important
The automatic data extraction we use makes it easier to train language-specific models. However, making them work for all languages is a challenge, requiring careful checking of language nuance by experts, as well as having an acceptable amount of training data. We found that having one model for all Latin-script languages works well (e.g. Czech, Polish, German, English), with individual models for each of Chinese, Japanese, Korean, Thai, Arabic and Russian. While Smart Linkify currently supports 16 languages, we are experimenting with models that support even more languages, which is especially challenging given the mobile model size constraints and the trickiness of languages that do not split words on spaces.

Next Steps
While the technique described in this post enables the fast and accurate annotation of phone numbers and postal addresses in text, the recognition of flight numbers, dates and times, or IBANs is currently implemented with a more traditional technique using standard regular expressions. However, we are looking into creating ML models for dates and times as well, particularly for recognizing informal relative date/time specifications prevalent in messaging contexts, like "next Thursday" or "in 3 weeks".

The small model and binary size as well as low latency are very important for mobile deployment. The models and the code we developed are available open-source as part of Android framework. We believe that the architecture could extend to other on-device text annotation problems and we look forward to seeing new use cases from our developer community!

Source: Google AI Blog



Announcing the Mediation Test Suite Beta


Today we're announcing the release of Mediation Test Suite Beta. Mediation Test Suite is a lightweight SDK that enables Google AdMob publishers to easily test mediation ad network integrations without having to make changes in the AdMob UI, saving you and your developers time. It is available on Android, iOS, and Unity.

Mediation Test Suite allows you to:

  • View a full list of mediation ad source configurations for your app
  • Automatically check your project for missing SDKs, adapters, and manifest changes required by partner ad sources
  • Load a banner, interstitial, rewarded, or native ad for any ad source using a certified Google Mobile Ads SDK implementation
  • Batch test multiple ad sources for the same ad unit
  • Test both open source mediation adapters and custom event adapters

Integrating Mediation Test Suite is easy -- once you have the SDK imported, it can be launched with just a single line of code. All you need is your AdMob app ID.

On Android, the launch code looks like this:

import com.google.android.ads.mediationtestsuite.MediationTestSuite;
...
String appId = "YOUR-ADMOB-APP-ID";
MediationTestSuite.launch(MainActivity.this, appId);

On iOS, all that's required is importing the correct header and launching the Test Suite:

#import "GoogleMobileAdsMediationTestSuite.h"
...
NSString* appId = @"YOUR-ADMOB-APP-ID";
[GoogleMobileAdsMediationTestSuite presentWithAppID:appId
onViewController:self delegate:nil];

Unity is just as simple, but please note that you need to use the appropriate app ID for your platform:

using GoogleMobileAdsMediationTestSuite.Api;
...
#if UNITY_ANDROID
string appId = "YOUR-ANDROID-ADMOB-APP-ID";
#elif UNITY_IPHONE
string appId = "YOUR-iOS-ADMOB-APP-ID";
#else
string appId = "";
#endif
MediationTestSuite.Show(appId);

Including Mediation Test Suite in production builds is optional

You are not required to keep the Mediation Test Suite library in the production release of your app; however, you may choose to leave it in and hide it behind a debug gesture. Doing so enables you to launch Mediation Test Suite within your production build.
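
For example, a production build could expose the suite behind a long-press on an otherwise inert view. This is a hedged Kotlin sketch: the extension function, trigger view, and gesture are illustrative choices, and only MediationTestSuite.launch comes from the snippet above.

import android.app.Activity
import android.view.View
import com.google.android.ads.mediationtestsuite.MediationTestSuite

// Long-pressing a normally inert view (e.g. the version label on an About screen)
// launches the Mediation Test Suite without surfacing it anywhere in the regular UI.
fun Activity.installMediationTestSuiteGesture(hiddenTrigger: View, appId: String) {
    hiddenTrigger.setOnLongClickListener {
        MediationTestSuite.launch(this, appId)
        true  // consume the long press
    }
}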

You can find more information about how to use Mediation Test Suite in the developer guide (Android | iOS | Unity). Remember that Mediation Test Suite is a beta product, so if you have any questions or feedback, please contact us on the developer forum.

Google releases source for Google I/O 2018 for Android


Posted by Shailen Tuli, DPE

Today we're releasing the source code for the official Google I/O 2018 for Android app.

The 2018 version constitutes a comprehensive rewrite of the app. For many years, the app had used a ContentProvider + SyncAdapter architecture. This year, we rewrote the app using Architecture Components and brought the code in sync with the Android team's current recommendations for building modern apps.

Architecture

We followed the recommendations laid out in the Guide to App Architecture for writing modular, testable and maintainable code when deciding on the architecture for the app. We kept logic away from Activities and Fragments and moved it to ViewModels. We observed data using LiveData and used the Data Binding Library to bind UI components in layouts to the app's data sources.

The overall architecture of the app can be summarized in this diagram:

We used a Repository layer for handling data operations. IOSched's data comes from a few different sources — user data is stored in Cloud Firestore (either remotely or in a local cache for offline use), user preferences and settings are stored in SharedPreferences, conference data is stored remotely and is fetched and stored in memory for the app to use — and the repository modules are responsible for handling all data operations and abstracting the data sources from the rest of the app. If we ever wanted to swap out the Firestore backend for a different data source in the future, our architecture allows us to do so in a clean way.
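
A hedged Kotlin sketch of this shape (the class and type names are placeholders rather than the app's actual classes, and the androidx.lifecycle imports stand in for whichever Architecture Components packages the project uses):

import androidx.lifecycle.LiveData
import androidx.lifecycle.ViewModel

// Placeholder model and repository standing in for the app's real classes.
data class Session(val id: String, val title: String)

interface SessionRepository {
    // Backed by Firestore, an in-memory cache, SharedPreferences, etc.
    fun getSessions(): LiveData<List<Session>>
}

class ScheduleViewModel(private val repository: SessionRepository) : ViewModel() {
    // The Fragment observes this LiveData (or binds to it with the Data Binding Library);
    // no business logic lives in the Activity or Fragment.
    val sessions: LiveData<List<Session>> = repository.getSessions()
}

Because the UI layer only sees the SessionRepository interface, a Firestore-backed implementation could be swapped for another data source without touching the ViewModel or its observers.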

We implemented a lightweight domain layer, which sits between the data layer and the presentation layer, and handles discrete pieces of business logic off the UI thread; examples are available in the project's source.

We used Dagger2 for dependency injection and we heavily relied on dagger-android to abstract away boilerplate code.

We used Espresso for basic instrumentation tests and JUnit and Mockito for unit testing.

Firebase

The use of Firebase technologies has grown in the app as the Firebase platform has matured. The 2018 version uses the following Firebase components:

  • Cloud Firestore is our source for all user data (events starred or reserved by a user). Firestore gave us automatic sync and also seamlessly managed offline functionality for us.
  • Firebase Cloud Functions allowed us to run backend code. The reservations feature heavily depended on Functions checking a user's status (only attendees were allowed to make reservations), checking space availability and persisting reservation status in Firestore.
  • Firebase Cloud Messaging let us inform the app about changes to conference data on our server. Conference data is mostly static, but it does change from time to time, especially after the keynote. The app has traditionally used a ping-and-fetch model when working with conference data, and we retained that usage this year.
  • Remote Config helped us manage in-app constants. In previous years, we had found ourselves unable to inform users when data not directly related to the conference schedule — WiFi information, conference shuttle schedule, etc. — changed unexpectedly. Remote Config helped us update such values in a lightweight manner.

Kotlin

We made an early decision to rewrite the app from scratch to bring it in line with modern Android architecture. Using Kotlin for the rewrite was an easy choice: we loved Kotlin's expressive, concise, and powerful syntax; we found that Kotlin's support for safety features including nullability and immutability made our code more resilient; and we leveraged the enhanced functionality provided by Android Ktx extensions.

Material Design

At I/O 2018, the Material Design team announced Material Theming, giving apps a much greater ability to customize Material Design to reflect their product's brand. As we launched the app before Material Theming, we couldn't use all of the new components, but we managed to sneak a couple in, like the new Bottom App Bar with an inset Floating Action Button, and we were able to incorporate a lot of the conference's branding elements.

Future plans

The rewrite of the app brings the code in sync with Android's opinionated recommendations about building apps, and it resulted in a cleaner, more maintainable codebase. We'll continue working on the app, incorporating Jetpack components as they become available and finding opportunities to showcase platform features that are good fits for the app. Developers can follow changes to the code on GitHub.

Android 9 Pie (Go edition): New features and more options this fall


We believe everyone across the globe should have powerful, high-quality device experiences. That’s why we introduced Android (Go edition) last year, with the goal to provide a fast and smooth experience optimized for first-time and entry-level smartphone owners.

We welcomed our first wave of Android (Go edition) phones this April, and now there are more than 200 devices available in 120+ countries including India, South Africa, U.S., Nigeria and Brazil. We also continued to refine the core operating system and have added a variety of new useful features to apps like Google Go, YouTube Go, Files Go and more.

Whether it comes with an HD or regular VGA screen, 4GB, 8GB or 16GB of storage, or 3G or 4G support, there's a Go edition device for everyone. In some countries, devices are available for as little as $30 USD. With more than 100 manufacturers planning to release devices before the end of the year, you can expect even more options when choosing your first Go edition device.


Android 9 Pie (Go edition)

With Android (Go edition), we aim to bring the latest Android improvements to more entry-level phone buyers. As part of the release of Android 9 Pie, we’re introducing a brand new Go edition experience. Pie (Go edition) includes:

  • Up to an additional 500MB of storage available out of the box

  • Faster device boot times

  • Top-of-the-line security features like verified boot

  • An accessible dashboard for tracking and monitoring data consumption


Android Pie (Go edition) comes with up to an additional 500MB out of the box compared to Oreo (Go edition), and more than twice what you’d find on a non-Go edition phone

Collectively, these features help solve some of the most common pain points for entry-level device owners: storage, performance, data management and security. Keep an eye out for the first devices offering the new Pie (Go edition) experience to hit shelves later this fall.

Go with Google

A core part of the Go edition experience is the fully redesigned set of Google apps, which are specifically built to serve the needs of first-time smartphone owners. These apps include unique features, like free downloading in YouTube Go, that aren’t found in the classic app. Since February we’ve introduced a number of improvements to our suite of apps:

  • Google Go now offers the ability to read webpages aloud and highlights each word so you can follow along.
  • YouTube Go makes it easier to enjoy videos while using less data with new features like gallery mode for downloaded content.

  • Maps Go now features navigation, making it possible for people with Go edition devices or unstable connections to use turn-by-turn directions whether they're traveling by car, by bus, or on foot.

  • Files Go, which has saved users ~90TB of space since launch, is now capable of transferring data peer-to-peer, without using mobile data, at speeds up to ~490 Mbps.

  • Assistant Go now supports additional languages including Spanish, Brazilian Portuguese and Indonesian, and has expanded support for device actions like controlling Bluetooth, camera and flashlight, and added reminders.

  • The Android Messages app for Android (Go edition) is now ~50 percent smaller in size, and the Phone app includes caller ID and spam detection.

With a broader range of options and better performing phones, more people can come online for the first time and have access to essential, helpful information. We’re excited to keep the momentum going.

Source: Android


Updating Wear OS Google Play Store policy to increase app quality


Posted by Hoi Lam, Lead Developer Advocate, Wear OS by Google

Today we are announcing a new initiative to improve the quality of Wear apps and their presentation in the Google Play Store. The Wear app review process, which has been in place since the launch of Android Wear 2.0, is currently optional. It will become mandatory for apps to be listed on the Wear OS by Google version of the Google Play Store from the following dates:

  • New Wear apps: 1 October 2018
  • Existing Wear apps: 4 March 2019.

The review process for mobile apps remains unchanged, and is independent of the Wear app review. Mobile app updates will not be blocked if they fail the Wear app review.

We hope this lightweight app review process will improve the quality of Wear app experiences across the wide range of devices available to your users. In addition, since screenshots are required for the Wear app review, this will improve the discovery and presentation of your Wear apps in the Google Play Store.

See a comprehensive list of review criteria here. The following are common issues we see during Wear app reviews:

  • Support for different screen types - Wear OS by Google is available in both round and square screens, and some round devices also have a chin. Developers are advised to test on all screen types. If a physical device is unavailable, please use the Wear OS by Google emulator. (A small runtime check for screen shape is sketched after this list.)
  • Wear OS by Google app screenshot - To pass the review, the app needs to have at least one Wear OS app screenshot. To keep pre-release Wear apps private, the Google Play Store will not show the Wear screenshots unless the Wear app is in production or open testing. Currently, the Google Play Store only supports uploading one set of screenshots across all production and test versions. For existing Wear apps, we recommend that developers keep their production Wear app screenshots unchanged when uploading new open test or closed test Wear apps.
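
Relating to the first point above, here is a small Kotlin sketch of a runtime screen-shape check (the helper name is illustrative; round/notround resource qualifiers can cover most layout differences declaratively):

import android.content.Context

// True on round watch screens; available since API 23 via Configuration.isScreenRound().
fun isRoundScreen(context: Context): Boolean =
    context.resources.configuration.isScreenRound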

Opting out of app review for early prototypes

We understand that some developers need to experiment with their Wear apps in the early stages of app development, and a Wear app review at this stage might not be appropriate. In this case, developers have two options:

Please note that the open test and closed test channels will be subject to Wear app review to help front-load the quality assurance process and to avoid leaving reviews to the last minute.

Thank you for your continuing support of Wear OS by Google.

Streaming support spec for hearing aids on Android


Posted by Seang Chau, Vice President, Engineering

According to the World Health Organization1, around 466 million people worldwide have disabling hearing loss. This number is expected to increase to 900 million people by the year 2050. Google is working with GN Hearing to create a new open specification for hearing aid streaming support on future versions of Android. Users with hearing loss will be able to connect, pair, and monitor their hearing aids so they can hear their phones loudly and clearly.

Hearing aid users expect a high quality, low latency experience with minimal impact on phone and hearing aid battery life. We've published a new hearing aid spec for Android smartphones: Audio Streaming for Hearing Aids (ASHA) on Bluetooth Low Energy Connection-Oriented Channels. ASHA is designed to have a minimal impact on battery life and low latency while maintaining a high quality audio experience for users who rely on hearing aids. We look forward to continually evolving the spec to even better meet the needs of our users.

The spec details the pairing and connectivity, network topology, system architecture, and system requirements for implementing hearing aids using low energy connection-oriented channels. Any hearing aid manufacturer can now build native hearing aid support for Android.

The protocol specification is available here.
