Mobile first to AI first

From mobile first to AI first — the Google I/O 2017 conference.

The announcements presented at the Google I/O 2017 developer conference reinforced the real meaning of Google’s mission:

to organize the world’s information and make it universally accessible and useful.

Google is making the transition from mobile first to AI first, which means that users’ main problems will be solved through artificial intelligence. Moreover, the company’s actions democratize this knowledge and make the solutions easily accessible to every developer, so companies can widely use the tools that were presented. That is why changes are coming for business and for all of us.

We have put together a summary of the first day of Google I/O, so you can catch up on the news:

  • Google Lens — a smart technology that recognizes what the user is looking at. It can extend the user’s knowledge about an object or help a visually impaired person understand the objects around them. “Google Lens is a set of vision-based computing capabilities that can understand what you’re looking at,” explained CEO Sundar Pichai. Thanks to the integration of Tango, which recognizes the environment, with Google Maps, it is possible to scan a location through a smartphone and get information about the place (Google calls this the Visual Positioning Service). Users can make better choices by getting the information they need, e.g. restaurant reviews, menus, additional services, etc.
  • Google Photos — can identify the best photos and select them to make a photo book. It can also recognize who is in a photo and suggest sharing the files with the people pictured. This idea of instant communication is further enhanced by shared libraries, where users can share their files instantly while uploading.
  • — a community for sharing AI solutions and improving specific technologies. Its aim is to spread the use of AI in real business settings.
  • Tensor Processing Units — the second generation of TPU chips, which make machine learning faster and more efficient. Each provides up to 180 teraflops (trillions of floating-point operations per second), a dramatic improvement in speed and accuracy over current processing units (e.g. NVIDIA’s Volta GPU, dedicated to AI processing, offers 100 teraflops). As Google wrote: “To put this into perspective, our new large-scale translation model takes a full day to train on 32 of the world’s best commercially available GPUs — while one 1/8th of a TPU pod can do the job in an afternoon.”
  • Google Assistant — available on Google Home and the Pixel phone, it can help with a wide range of queries. Its functionality is similar to Siri’s (available on iOS devices), but it is more powerful thanks to several extensions: it handles more complicated queries and offers third-party integrations. Google Assistant also lets you control connected devices and accepts commands by voice or typing. Moreover, it is now available on iOS as well, and support for a variety of languages is coming soon.
  • Android O Beta — this version of Android adds utilities that make it easier to use. Autofill speeds up authentication; the system offers picture-in-picture support for watching videos while doing other tasks, updated notifications, battery-usage limits, the ability to spot security issues, and more. Android O also boots twice as fast, and apps perform better.
  • Google for Jobs — uses AI to match applicants with job offers. With more searches it learns the user’s preferences and returns more suitable results. Users also get additional information, such as commuting time, so they can weigh all the circumstances.
  • Daydream VR headset — a standalone headset that works without a connection to a games console, a computer, or even a smartphone. Thanks to onboard sensors and cameras, it lets users walk around virtual worlds without fear of real-world obstacles. Google will deliver a Qualcomm-based reference design to hardware makers rather than ship its own device.
  • 360 YouTube videos — 360-degree video is coming to YouTube’s TV app, where users can pan around the video with the remote.
  • Kotlin — Google announced Kotlin as a first-class language for building Android apps. Kotlin is a modern programming language (similar to Apple’s Swift) that is easy to learn and makes Android development more pleasant. It is 100% interoperable with existing libraries and runs on the JVM, on Android, and in the browser. The idea is to “make Kotlin a uniform tool for end-to-end development of various applications bridging multiple platforms with the same language” (JetBrains blog).

A video recap of the conference from The Verge:

Everything is becoming smarter, more automated, and more personalized. It seems users will get answers instantly and with much less effort; convenience of communication is becoming the main direction of development. So now the question is how business will use these opportunities. Soon we will surely see companies split into those that set the direction of development and those that fall behind.

Some Google statistics presented by Sundar Pichai, Chief Executive Officer at Google:

  • Over 1 billion hours of video watched each day on YouTube
  • Over 1 billion kilometers navigated with Google Maps every day
  • Over 800 million monthly active users on Google Drive
  • More than 500 million active users on Google Photos
  • 1.2 billion photos uploaded to Google every day
  • 2 billion active users of Android globally

Posted by 홍반장水

There are many causes of VR motion sickness.

Motion sickness occurs when visual information conflicts with the vestibular system (the semicircular canals of the inner ear), which governs movement and balance. Simply put, it happens because the body stays still while the field of view keeps changing. The car sickness, seasickness, and airsickness we commonly experience all work the same way. A VR device must therefore fool the brain and minimize this sensory mismatch as much as possible.

Most VR devices support head tracking, where the display follows your head movement. Because immersion is crucial in VR content, a first-person viewpoint is usually chosen, which makes this feature essential. A VR device renders a 3D scene in real time, and since it must process views for the left and right eyes simultaneously, high performance is required.

If the hardware’s computing power is insufficient, latency appears when the viewpoint moves: there is a subtle delay between turning your head and the 3D scene being redrawn. As this lag accumulates, fatigue builds up and VR sickness can set in. This is one of the reasons Sony, after releasing the PlayStation VR, soon announced the PlayStation 4 Pro to provide a better VR environment.

How can VR sickness be overcome?

For a more comfortable VR experience, a system reportedly needs a response time under 20 ms, a resolution of 8–16K, and a frame rate of 90–120 fps. Even where this is technically feasible, it is expected to take time before commercial products arrive, because they must launch at a reasonable price given market demand and the degree of mainstream adoption. So is taking the motion-sickness medicine mentioned earlier really the only option?
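To relate the numbers above, the per-frame time budget at a given refresh rate follows from simple arithmetic. A minimal sketch (the function name is my own, for illustration only):

```javascript
// Per-frame rendering budget at a given refresh rate, in milliseconds.
function frameBudgetMs(fps) {
  return 1000 / fps;
}

// At the 90–120 fps range cited above, each frame (covering both eyes)
// must be produced in roughly 8.3–11.1 ms, leaving headroom inside the
// ~20 ms motion-to-photon target.
console.log(frameBudgetMs(90).toFixed(1));   // ≈ 11.1
console.log(frameBudgetMs(120).toFixed(1));  // ≈ 8.3
```

If rendering misses this budget, the display falls back to a slower effective frame rate, which is exactly the head-motion latency described above.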

The best approach is to take proper breaks. At first, rest for about 10 minutes every 20–30 minutes. Motion sickness varies from person to person, and people reportedly get used to VR the more they use it, so it is best to increase your usage time gradually.

Recently, VR content stores have started displaying a motion-sickness rating alongside purchase information; if you are prone to VR sickness, it is worth checking.


A-Frame: A framework for the virtual reality web

For the core library, check out A-Frame Core.

Building blocks for the VR Web.

  • Virtual Reality: Drop in the library and have a WebVR scene within a few lines of markup.
  • Based on the DOM: Manipulate with JavaScript, use with your favorite libraries and frameworks.
  • Entity-Component System: Use the entity-component system for better composability and flexibility.

Find out more:

A-Frame is a framework for building things for the virtual reality web. You can use markup to create VR experiences that work across desktop, iPhones, and the Oculus Rift.
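The “few lines of markup” claim can be sketched as follows. The `<a-scene>`, `<a-box>`, and `<a-sky>` elements are A-Frame’s documented primitives, but the script URL and the scene contents here are illustrative, not taken from this page:

```html
<html>
  <head>
    <!-- Drop in the A-Frame library (path/version shown is illustrative) -->
    <script src="https://aframe.io/releases/aframe.min.js"></script>
  </head>
  <body>
    <!-- a-scene sets up the camera, renderer, and WebVR mode -->
    <a-scene>
      <!-- Entities are ordinary DOM elements, so they can also be
           created and manipulated with plain JavaScript -->
      <a-box position="0 1 -3" color="#4CC3D9"></a-box>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Because the scene graph lives in the DOM, it plays well with existing JavaScript libraries and frameworks, which is the point of the “Based on the DOM” bullet above.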
