Naver announced on the 30th that it will offer Netflix access to Naver Plus Membership subscribers starting this November.
Through the partnership between Naver and Netflix, Naver Plus Membership subscribers can select the 'Netflix Standard with Ads' plan as one of the digital content benefits included in their 4,900 KRW monthly subscription. The 'Netflix Standard with Ads' plan matches the quality of the regular Standard plan (Full HD, two concurrent streams, unlimited mobile games, and downloads) but shows some ads during playback, so Naver Plus Membership subscribers can watch content across a wide range of genres at the same quality as Netflix's ad-supported Standard tier.
Naver Plus Membership subscribers are also offered upgrade options, just as with Netflix's own plans: paying an additional 8,600 KRW upgrades to the Standard plan, and paying an additional 12,100 KRW upgrades to the Premium plan.
The two companies expect a range of synergies from the partnership. Naver's strategy is to increase the value its membership subscribers receive by offering them more content, while Netflix secures a point of contact between its content offerings and Naver's membership subscribers.
In its press release, Naver noted that "Naver Plus Membership is the first membership service among Korean IT platforms to offer Netflix access." Naver and Netflix also plan to explore further collaborations to maximize user satisfaction.
Jeong Hanna, leader of Naver Membership, said, "The diverse and flexible benefit design of Naver Membership expands users' choices and increases the benefits they actually feel, which is the backdrop for our high retention, and it also translates into synergies that grow together with our partners. Through this collaboration with Netflix, we will further strengthen the content competitiveness and diversity of our membership service."
Meanwhile, the Netflix collaboration is Naver Plus Membership's fourth external partnership this year, following ▲food delivery ▲movie theaters ▲convenience stores, as Naver continues to diversify benefits through outside tie-ups to strengthen user loyalty. According to Naver, Naver Plus Membership's subscription retention rate is 95%.
Source: https://www.ciokorea.com/news/351558
Since our previous posts regarding Content Engineering’s role in enabling search functionality within Netflix’s federated graph (the first post, where we identify the issue and elaborate on the indexing architecture, and the second post, where we detail how we facilitate querying) there have been significant developments. We’ve opened up Studio Search beyond Content Engineering to the entirety of the Engineering organization at Netflix and renamed it Graph Search. There are over 100 applications integrated with Graph Search and nearly 50 indices we support. We continue to add functionality to the service. As promised in the previous post, we’ll share how we partnered with one of our Studio Engineering teams to build reverse search. Reverse search inverts the standard querying pattern: rather than finding documents that match a query, it finds queries that match a document.
Intro

Tiffany is a Netflix Post Production Coordinator who oversees a slate of nearly a dozen movies in various states of pre-production, production, and post-production. Tiffany and her team work with various cross-functional partners, including Legal, Creative, and Title Launch Management, tracking the progression and health of her movies.
So Tiffany subscribes to notifications and calendar updates specific to certain areas of concern, like “movies shooting in Mexico City which don’t have a key role assigned”, or “movies that are at risk of not being ready by their launch date”.
Tiffany is not subscribing to updates of particular movies, but subscribing to queries that return a dynamic subset of movies. This poses an issue for those of us responsible for sending her those notifications. When a movie changes, we don’t know who to notify, since there’s no association between employees and the movies they’re interested in.
We could save these searches, and then repeatedly query for the results of every search, but because we’re part of a large federated graph, this would have heavy traffic implications for every service we’re connected to. We’d have to decide if we wanted timely notifications or less load on our graph.
If we could answer the question “would this movie be returned by this query”, we could re-query based on change events with laser precision and not impact the broader ecosystem.
The Solution

Graph Search is built on top of Elasticsearch, which has the exact capabilities we require:
- percolator fields, which can be used to index Elasticsearch queries
- percolate queries, which can be used to determine which indexed queries match an input document
Instead of taking a search (like "spanish-language movies shot in Mexico City") and returning the documents that match (one for Roma, one for Familia), a percolate query takes a document (one for Roma) and returns the searches that match that document, like "spanish-language movies" and "scripted dramas".
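To make the inversion concrete, here is a minimal Python sketch of the idea. This is a toy model, not Netflix's implementation or Elasticsearch's percolator internals; the saved-search names and movie fields are illustrative.

```python
# Toy model of percolation: store the queries, then ask which queries
# match a given document (the inverse of normal search).

saved_searches = {
    "spanish-language movies": lambda d: d["language"] == "es",
    "movies shot in Mexico City": lambda d: d["shootLocation"] == "Mexico City",
    "scripted dramas": lambda d: d["genre"] == "drama" and d["scripted"],
}

def percolate(document):
    """Return the names of saved searches whose predicate matches the document."""
    return sorted(name for name, pred in saved_searches.items() if pred(document))

roma = {"language": "es", "shootLocation": "Mexico City",
        "genre": "drama", "scripted": True}
print(percolate(roma))
```

A real percolate query does the same thing at scale: the predicates live in an index rather than a dictionary, and matching is performed by the search engine.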
We've exposed this functionality as the ability to save a search, called a SavedSearch, which is a persisted filter on an existing index.
```graphql
type SavedSearch {
  id: ID!
  filter: String
  index: SearchIndex!
}
```

That filter, written in Graph Search DSL, is converted to an Elasticsearch query and indexed in a percolator field. To learn more about Graph Search DSL and why we created it rather than using Elasticsearch query language directly, see the Query Language section of "How Netflix Content Engineering makes a federated graph searchable (Part 2)".
We’ve called the process of finding matching saved searches ReverseSearch. This is the most straightforward part of this offering. We added a new resolver to the Domain Graph Service (DGS) for Graph Search. It takes the index of interest and a document, and returns all the saved searches that match the document by issuing a percolate query.
```graphql
"""
Query for retrieving all the registered saved searches, in a given index,
based on a provided document. The document in this case is an ElasticSearch
document that is generated based on the configuration of the index.
"""
reverseSearch(
  after: String,
  document: JSON!,
  first: Int!,
  index: SearchIndex!
): SavedSearchConnection
```

Persisting a SavedSearch is implemented as a new mutation on the Graph Search DGS. This ultimately triggers the indexing of an Elasticsearch query in a percolator field.
```graphql
"""
Mutation for registering and updating a saved search.
They need to be updated any time a user adjusts their search criteria.
"""
upsertSavedSearch(input: UpsertSavedSearchInput!): UpsertSavedSearchPayload
```

Supporting percolator fields fundamentally changed how we provision the indexing pipelines for Graph Search (see the Architecture section of "How Netflix Content Engineering makes a federated graph searchable"). Rather than having a single indexing pipeline per Graph Search index, we now have two: one to index documents and one to index saved searches to a percolate index. We chose to add percolator fields to a separate index in order to tune performance for the two types of queries separately.
Elasticsearch requires the percolate index to have a mapping that matches the structure of the queries it stores and therefore must match the mapping of the document index. Index templates define mappings that are applied when creating new indices. By using the index_patterns functionality of index templates, we’re able to share the mapping for the document index between the two. index_patterns also gives us an easy way to add a percolator field to every percolate index we create.
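The relationship between the two mappings can be sketched in Python: the percolate mapping is the document mapping plus one percolator field. The helper below is a hypothetical illustration (the field names come from the article's examples; the function itself is not part of Graph Search).

```python
import copy

# Document-index mapping, matching the article's example fields.
document_mapping = {
    "properties": {
        "movieTitle": {"type": "keyword"},
        "isArchived": {"type": "boolean"},
    }
}

def to_percolate_mapping(doc_mapping, query_field="percolate_query"):
    """Copy the document mapping and add a percolator field, so the
    percolate index can store queries alongside the fields they reference."""
    mapping = copy.deepcopy(doc_mapping)
    mapping["properties"][query_field] = {"type": "percolator"}
    return mapping

percolate_mapping = to_percolate_mapping(document_mapping)
print(sorted(percolate_mapping["properties"]))
```

In practice this sharing is done declaratively via index templates with `index_patterns`, as described above, rather than in application code.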
Example of document index mapping
Index pattern — application_*
```json
{
  "order": 1,
  "index_patterns": ["application_*"],
  "mappings": {
    "properties": {
      "movieTitle": { "type": "keyword" },
      "isArchived": { "type": "boolean" }
    }
  }
}
```

Example of percolate index mappings
```json
{
  "application_v1_percolate": {
    "mappings": {
      "_doc": {
        "properties": {
          "movieTitle": { "type": "keyword" },
          "isArchived": { "type": "boolean" },
          "percolate_query": { "type": "percolator" }
        }
      }
    }
  }
}
```

Percolate Indexing Pipeline

The percolate index isn't as simple as taking the input from the GraphQL mutation, translating it to an Elasticsearch query, and indexing it. Versioning, which we'll talk more about shortly, reared its ugly head and made things a bit more complicated. Here is the way the percolate indexing pipeline is set up.
See Data Mesh — A Data Movement and Processing Platform @ Netflix to learn more about Data Mesh.

1. When SavedSearches are modified, we store them in our CockroachDB, and the source connector for the Cockroach database emits CDC events.
2. A single table is shared for the storage of all SavedSearches, so the next step is filtering down to just those that are for *this* index using a filter processor.
3. As previously mentioned, what is stored in the database is our custom Graph Search filter DSL, which is not the same as the Elasticsearch DSL, so we cannot directly index the event to the percolate index. Instead, we issue a mutation to the Graph Search DGS, which translates the DSL to an Elasticsearch query.
4. We then index the Elasticsearch query as a percolate field in the appropriate percolate index.
5. The success or failure of the indexing of the SavedSearch is returned. On failure, the SavedSearch events are sent to a Dead Letter Queue (DLQ) that can be used to address any failures, such as fields referenced in the search query being removed from the index.

Now a bit on versioning to explain why the above is necessary. Imagine we've started tagging movies that have animals. If we want users to be able to create views of "movies with animals", we need to add this new field to the existing search index to flag movies as such. However, the mapping in the current index doesn't include it, so we can't filter on it. To solve for this we have index versions.
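The CDC filtering stage of the pipeline can be sketched in Python. The event shapes below are hypothetical; in the real pipeline this is a Data Mesh filter processor, not application code.

```python
# Hypothetical CDC events for SavedSearch rows. All indices share one
# CockroachDB table, so each pipeline keeps only the events for its index.

events = [
    {"id": "s1", "index": "application", "filter": "movieTitle == 'Roma'"},
    {"id": "s2", "index": "talent", "filter": "role == 'director'"},
    {"id": "s3", "index": "application", "filter": "isArchived == false"},
]

def filter_for_index(cdc_events, index_name):
    """Keep only SavedSearch change events belonging to one search index."""
    return [e for e in cdc_events if e["index"] == index_name]

print([e["id"] for e in filter_for_index(events, "application")])
```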
[Image: Dalia & Forrest from the series Baby Animal Cam]

When a change is made to an index definition that necessitates a new mapping, like when we add the animal tag, Graph Search creates a new version of the Elasticsearch index and a new pipeline to populate it. This new pipeline reads from a log-compacted Kafka topic in Data Mesh — this is how we can reindex the entire corpus without asking the data sources to resend all the old events. The new pipeline and the old pipeline run side by side, until the new pipeline has processed the backlog, at which point Graph Search cuts over to the new version using Elasticsearch index aliases.
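The alias cutover can be modeled with a few lines of Python. This is a toy illustration of the design choice, not Elasticsearch's alias API: readers always query through the alias, so repointing it swaps versions without clients ever seeing a half-built index.

```python
# Toy model of version cutover via index aliases: the alias always points
# at exactly one concrete, fully-populated versioned index.

aliases = {"application": "application_v1"}

def cut_over(alias, new_index):
    """Repoint an alias once the new pipeline has processed its backlog.
    Returns the previously aliased index (e.g., for cleanup)."""
    old = aliases.get(alias)
    aliases[alias] = new_index
    return old

previous = cut_over("application", "application_v2")
print(previous, "->", aliases["application"])
```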
Creating a new index for our documents means we also need to create a new percolate index for our queries so they can have consistent index mappings. This new percolate index also needs to be backfilled when we change versions. This is why the pipeline works the way it does — we can again utilize the log compacted topics in Data Mesh to reindex the corpus of SavedSearches when we spin up a new percolate indexing pipeline.
We persist the user-provided filter DSL to the database rather than immediately translating it to Elasticsearch query language. This enables us to make changes or fixes to how we translate the saved search DSL to an Elasticsearch query. We can deploy those changes by creating a new version of the index, as the bootstrapping process will re-translate every saved search.

Another Use Case

We hoped reverse search functionality would eventually be useful for other engineering teams. We were approached almost immediately with a problem that reverse searching could solve.
The way you make a movie can be very different based on the type of movie it is. One movie might go through a set of phases that are not applicable to another, or might need to schedule certain events that another movie doesn’t require. Instead of manually configuring the workflow for a movie based on its classifications, we should be able to define the means of classifying movies and use that to automatically assign them to workflows. But determining the classification of a movie is challenging: you could define these movie classifications based on genre alone, like “Action” or “Comedy”, but you likely require more complex definitions. Maybe it’s defined by the genre, region, format, language, or some nuanced combination thereof. The Movie Matching service provides a way to classify a movie based on any combination of matching criteria. Under the hood, the matching criteria are stored as reverse searches, and to determine which criteria a movie matches against, the movie’s document is submitted to the reverse search endpoint.
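A minimal Python sketch of this pattern follows. The criteria names, movie fields, and workflow names are all hypothetical; the point is the shape of the design: criteria are stored as reverse searches, a movie document is matched against them, and the matches drive workflow assignment.

```python
# Hypothetical classification criteria, stored (conceptually) as reverse
# searches, plus a mapping from criteria to the workflows they imply.

criteria = {
    "korean-drama": lambda m: m["language"] == "ko" and m["genre"] == "drama",
    "global-action": lambda m: m["genre"] == "action",
    "animated-feature": lambda m: m["format"] == "animation",
}

workflow_for = {
    "korean-drama": "kdrama-localization-workflow",
    "animated-feature": "animation-post-workflow",
}

def matching_criteria(movie):
    """Reverse-search step: which criteria does this movie document match?"""
    return sorted(name for name, pred in criteria.items() if pred(movie))

def assign_workflows(movie):
    """Map the matched criteria to the workflows the movie should follow."""
    return sorted({workflow_for[c] for c in matching_criteria(movie)
                   if c in workflow_for})

movie = {"language": "ko", "genre": "drama", "format": "live-action"}
print(assign_workflows(movie))
```

Because the criteria are data rather than code, new classifications can be added without redeploying the workflow system, which is what makes the matcher "externalized".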
In short, reverse search is powering an externalized criteria matcher. It’s being used for movie criteria now, but since every Graph Search index is now reverse-search capable, any index could use this pattern.
A Possible Future: Subscriptions

Reverse searches also look like a promising foundation for creating more responsive UIs. Rather than fetching results once as a query, the search results could be provided via a GraphQL subscription. These subscriptions could be associated with a SavedSearch and, as index changes come in, reverse search can be used to determine when to update the set of keys returned by the subscription.
Maximizing member engagement is an important goal for Netflix's product and engineering teams: delighting members and letting them become fully immersed in the content. Delivering a seamless playback experience with flawless in-app transitions, using the right mix of mature and modern client device technologies, is an important step toward that goal. This article describes our journey to deliver a better viewing experience by leveraging the capabilities of consumer streaming devices.
If you have a streaming device such as a Roku set-top box (STB) or an Amazon Fire TV Stick connected to your TV, you may have noticed options related to content frame rate in the device's display settings. Device manufacturers often call this feature "Match Content Frame Rate", "Auto-adjust Display Refresh Rate", or something similar. If you've ever wondered what these features do and how they can improve your viewing experience, read on. The following sections cover the basics of this feature and detail how the Netflix application uses it.
The Problem
Netflix's content catalog consists of videos captured and encoded at one of various frame rates ranging from 23.97 to 60 frames per second (fps). When a member chooses to watch a movie or TV show on a source device (e.g., a set-top box, streaming stick, or game console), the content is delivered and then decoded at its native frame rate. After the decoding step, the source device converts the video to the HDMI output frame rate, which is configured based on the capabilities of the HDMI input port of the connected sink device (TV, AVR, monitor, etc.). Typically, the output frame rate over HDMI is automatically set to 50fps in PAL regions and 60fps in NTSC regions.
While Netflix offers a limited amount of high-frame-rate content (50fps or 60fps), the majority of our catalog and viewing hours can be attributed to members watching 23.97–30fps content. This essentially means that, in most cases, the content goes through a process called frame rate conversion (FRC) on the source device, which converts it from its native frame rate to match the HDMI output frame rate by duplicating frames. Figure 1 illustrates a simple FRC algorithm that converts 24fps content to 60fps.
Figure 1: The 3:2 pulldown technique for converting 24fps content to 60fps
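The 3:2 pulldown in Figure 1 can be sketched in a few lines of Python. This is an illustrative model of the frame-duplication cadence, not actual device code: frames are repeated 3, 2, 3, 2, ... times, so every pair of 24fps input frames becomes 5 output frames (24 × 2.5 = 60).

```python
from itertools import cycle

def frc_pulldown(frames, pattern=(3, 2)):
    """Convert a frame sequence by duplicating frames in a repeating
    cadence. The default 3:2 pattern turns 24fps into 60fps."""
    out = []
    repeats = cycle(pattern)
    for frame in frames:
        out.extend([frame] * next(repeats))
    return out

source = list(range(24))          # one second of 24fps content
converted = frc_pulldown(source)  # one second of output at 60fps
print(len(converted))             # 60 output frames
```

Note that frames alternate between being shown 3 times and 2 times, so they are on screen for unequal durations, which is exactly the source of the judder discussed next.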
Converting the content to the output frame rate and sending it over HDMI sounds logical and straightforward, and in practice FRC works well when the output frame rate is an integer multiple of the native frame rate (e.g., 24→48, 25→50, 30→60, 24→120). When a non-integer-multiple conversion is required (e.g., 24→60 or 25→60), however, FRC introduces a visual artifact called judder, which manifests as uneven video playback, as shown in the figure below.
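The integer-multiple condition can be expressed as a simple check. This sketch treats frame rates as integers for clarity; real content also uses fractional rates such as 23.97fps, which the divisibility test below does not capture.

```python
def frc_introduces_judder(native_fps, output_fps):
    """FRC is clean when the output rate is an integer multiple of the
    native rate (every frame gets the same repeat count). Otherwise
    repeat counts are uneven and the result judders."""
    return output_fps % native_fps != 0

print(frc_introduces_judder(24, 48))  # clean 2x duplication
print(frc_introduces_judder(24, 60))  # 3:2 cadence, judders
```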