Commit f1d60797 authored by stitchEm

Blessed are the cheesemakers!

---
BasedOnStyle: Google
ColumnLimit: 120
SortIncludes: false
*.sh eol=lf
*.version eol=lf
*.gpu linguist-language=OpenCL
*.cl linguist-language=OpenCL
*.incl linguist-language=OpenCL
*.inst linguist-language=C++
*.ptv linguist-language=JSON
*.vah linguist-language=JSON
## Summary
Which problem was solved with this PR? Why was it solved in this particular way?
## Review
Which areas of the code should the review focus on? Where should reviewers start?
*Please assign code owners and possible reviewers to the pull request ("Reviewers" on the right side of this form)*
*Please also assign developers from other teams so that they keep up with the evolution of the software*
## Test coverage
- [ ] code already covered by the following tests:
- [ ] a new unit test is provided with this change set
- [ ] a new command line test is provided with this change set
- [ ] a new squish test is provided with this change set
- [ ] no test needed (please explain why)
external_deps/
bin/
*.gcda
*.gcno
CMakeLists.txt.user
*.dir/
# OS X
.DS_Store
.AppleDouble
.LSOverride
.project
.cproject
.settings/
.pydevproject
build*/
cmake_build/*
# CMake
CMakeCache.txt
CMakeFiles
CMakeScripts
Makefile
cmake_install.cmake
install_manifest.txt
CTestTestfile.cmake
# Ninja
.ninja_deps
.ninja_log
build.ninja
rules.ninja
# PyCharm
.idea/
# Auto generated test files
lib/src/test/ubjson.ubj
IO/src/decklink/include/DeckLinkAPI_h.h
IO/src/decklink/include/DeckLinkAPI_i.c
IO/src/decklink/DeckLinkAPI_i.c
Testing/Temporary/
lib/src/test/Testing/Temporary/
tests/Testing/Temporary/CTestCostData.txt
tests/Testing/Temporary/LastTest.log
*.bak
# Visual Studio autogenerated
*.vcxproj
*.vcxproj.filters
*.sln
*.opensdf
*.sdf
*.suo
# VIM
*.swp
*.un~
#Python
*.pyc
IO/src/decklink/DeckLinkAPI_i.c
# Set the default behavior, in case people don't have core.autocrlf set.
* text=auto
# Explicitly declare text files you want to always be normalized and converted
# to native line endings on checkout.
*.sh eol=lf
*.c text eol=lf
*.cu text eol=lf
*.cpp text eol=lf
*.hpp text eol=lf
*.hxx text eol=lf
# Declare files that will always have CRLF line endings on checkout.
*.sln text eol=crlf
*.vcxproj text eol=crlf
*.vcxproj.filters text eol=crlf
# Denote all files that are truly binary and should not be modified.
*.png binary
*.jpg binary
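To check which of the rules above apply to a given path, `git check-attr` can be run from inside the repository (the file names below are hypothetical examples):

```shell
# Show the attributes assigned to sample paths by the rules above.
# Run from anywhere inside the repository.
git check-attr text eol -- build.sh
git check-attr binary -- logo.png
```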
project(IO)
# safeguard against accidental misuse
if(NOT VIDEOSTITCH_CMAKE)
message(FATAL_ERROR "Please configure CMake from the root folder!")
endif(NOT VIDEOSTITCH_CMAKE)
# ----------------------------------------------------------------------------
# Helper macro to create a list of all I/O plugins
# ----------------------------------------------------------------------------
set(VS_IO_LIBRARIES)
macro(vs_add_IO_library lib_name)
add_library(${ARGV})
set(VS_IO_LIBRARIES
${VS_IO_LIBRARIES}
${lib_name}
PARENT_SCOPE)
set_property(TARGET ${lib_name} PROPERTY FOLDER "plugins")
add_cppcheck(${lib_name} VS)
endmacro()
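The macro is invoked from each plugin's own CMakeLists.txt. A minimal sketch (the `foo` plugin name and its source files are hypothetical; the helper functions are the ones used by the plugins in this repository):

```cmake
# Hypothetical plugin CMakeLists.txt using vs_add_IO_library
set(PLUGIN_NAME foo)
vs_add_IO_library(${PLUGIN_NAME} SHARED export.cpp fooReader.cpp fooReader.hpp)
include_lib_vs_headers(${PLUGIN_NAME})
link_target_to_libvideostitch(${PLUGIN_NAME})
```

Because `vs_add_IO_library` is a macro (no new scope), the `PARENT_SCOPE` set propagates the updated `VS_IO_LIBRARIES` list from the plugin's directory up to this one.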
# ----------------------------------------------------------------------------
# Global plugin compilation flags
# ----------------------------------------------------------------------------
if(MSVC)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /wd4251")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /NODEFAULTLIB")
endif()
# ----------------------------------------------------------------------------
# Core plugin output directories
# ----------------------------------------------------------------------------
# Set plugin output dir for the generic single-config case (e.g. make, ninja)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${VS_PLUGIN_DIR})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${VS_PLUGIN_DIR})
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${VS_PLUGIN_DIR})
# Set plugin output dir for multi-config builds (e.g. MSVC, Xcode)
foreach(OUTPUTCONFIG ${CMAKE_CONFIGURATION_TYPES})
string(TOLOWER ${OUTPUTCONFIG} OUTPUTCONFIG_LOW)
string(TOUPPER ${OUTPUTCONFIG} OUTPUTCONFIG_UP)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY_${OUTPUTCONFIG_UP} ${VS_PLUGIN_DIR_${OUTPUTCONFIG_UP}})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_${OUTPUTCONFIG_UP} ${VS_PLUGIN_DIR_${OUTPUTCONFIG_UP}})
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY_${OUTPUTCONFIG_UP} ${VS_PLUGIN_DIR_${OUTPUTCONFIG_UP}})
set(VS_PLUGIN_DIR ${VS_OUT_DIR}/${OUTPUTCONFIG_LOW}/${VS_PLUGIN_DIR_NAME})
set(VS_VAHANA_PLUGIN_DIR ${VS_OUT_DIR}/${OUTPUTCONFIG_LOW}/${VS_VAHANA_PLUGIN_DIR_NAME})
set(VS_TEST_DIR ${VS_OUT_DIR}/${OUTPUTCONFIG_LOW})
set(VS_STUDIO_DIR ${VS_OUT_DIR}/${OUTPUTCONFIG_LOW}/${VS_STUDIO_DIR_NAME})
endforeach(OUTPUTCONFIG CMAKE_CONFIGURATION_TYPES)
# ----------------------------------------------------------------------------
# Core plugins
# ----------------------------------------------------------------------------
option(DISABLE_AV "Create AV I/O plugin" OFF)
option(DISABLE_BMP "Create BMP Input plugin" ON)
option(DISABLE_JPEG "Create JPEG I/O plugin" ${ANDROID})
option(DISABLE_TIFF "Create TIFF Output plugin" ${ANDROID})
option(DISABLE_MP4 "Create MP4 Input plugin" ${NANDROID})
if(${GPU_BACKEND_DEFAULT} STREQUAL CUDA OR NOT WINDOWS)
add_subdirectory(src/test)
endif(${GPU_BACKEND_DEFAULT} STREQUAL CUDA OR NOT WINDOWS)
add_subdirectory(src/common)
add_subdirectory(src/av)
add_subdirectory(src/bmp)
add_subdirectory(src/jpg)
add_subdirectory(src/mp4)
add_subdirectory(src/pam)
add_subdirectory(src/png)
add_subdirectory(src/raw)
add_subdirectory(src/tiff)
add_subdirectory(src/exr)
# ----------------------------------------------------------------------------
# Vahana plugin output directories
# ----------------------------------------------------------------------------
# Set plugin output dir for the generic single-config case (e.g. make, ninja)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${VS_VAHANA_PLUGIN_DIR})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${VS_VAHANA_PLUGIN_DIR})
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${VS_VAHANA_PLUGIN_DIR})
# Set plugin output dir for multi-config builds (e.g. MSVC, Xcode)
foreach(OUTPUTCONFIG ${CMAKE_CONFIGURATION_TYPES})
string(TOUPPER ${OUTPUTCONFIG} OUTPUTCONFIG_UP)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY_${OUTPUTCONFIG_UP} ${VS_VAHANA_PLUGIN_DIR_${OUTPUTCONFIG_UP}})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_${OUTPUTCONFIG_UP} ${VS_VAHANA_PLUGIN_DIR_${OUTPUTCONFIG_UP}})
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY_${OUTPUTCONFIG_UP} ${VS_VAHANA_PLUGIN_DIR_${OUTPUTCONFIG_UP}})
endforeach(OUTPUTCONFIG CMAKE_CONFIGURATION_TYPES)
# ----------------------------------------------------------------------------
# Vahana plugins
# ----------------------------------------------------------------------------
option(DISABLE_RTMP "Create RTMP I/O plugin" OFF)
option(DISABLE_PORTAUDIO "Create Portaudio I/O plugin" ${CMAKE_CROSSCOMPILING})
add_subdirectory(src/rtmp)
if(LINUX OR ANDROID)
add_subdirectory(src/v4l2)
endif(LINUX OR ANDROID)
if(LINUX OR WINDOWS)
add_subdirectory(src/portaudio)
endif(LINUX OR WINDOWS)
if(WINDOWS)
add_subdirectory(src/aja)
add_subdirectory(src/decklink)
add_subdirectory(src/magewell)
add_subdirectory(src/magewellpro)
add_subdirectory(src/ximea_2)
endif(WINDOWS)
# ----------------------------------------------------------------------------
# make I/O plugin list available to root CMake project
set(VS_IO_LIBRARIES ${VS_IO_LIBRARIES} PARENT_SCOPE)
# IO
This folder contains the IO plugins for VideoStitch applications. A plugin is an IO library that can be used by all the applications using the VideoStitch library.
Interfaces are defined in the VideoStitch library. Each plugin declares the interfaces it implements in its export.cpp file.
I/O plugins link against libvideostitch and are loaded at runtime by libvideostitch.
Most of the time, a plugin is an adapter wrapping a third-party library.
Two types of plugins are used:
* Core plugins: I/O from files, including raw and network streams. Used in Studio and Vahana VR.
* Vahana plugins: I/O from external hardware devices, through acquisition cards. Used in Vahana VR.
Each plugin's documentation can be found in its folder: `src/<plugin>/README.md`.
# Build
Building the I/O plugins can be turned off globally by setting `BUILD_IO_PLUGINS=OFF`.
Individual plugins are turned off by default on systems where the required third-party libraries may not be available, and can be disabled with these CMake flags:
| Option | Default | Comments |
|:------------------|:------------------------|:------------------------|
| DISABLE_AV | OFF | |
| DISABLE_BMP | ON | Used for debugging only |
| DISABLE_JPEG | ${ANDROID} | |
| DISABLE_TIFF | ${ANDROID} | |
| DISABLE_MP4 | ${NANDROID} | Uses Android Media SDK |
| DISABLE_RTMP | OFF | |
| DISABLE_PORTAUDIO | ${CMAKE_CROSSCOMPILING} | |
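Assuming a standard out-of-source build, individual plugins can be disabled at configure time with these flags, for example:

```shell
# From a build directory: disable the RTMP and PortAudio plugins
cmake -DDISABLE_RTMP=ON -DDISABLE_PORTAUDIO=ON ..
```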
# safeguard against accidental misuse
if(NOT WINDOWS)
message(FATAL_ERROR "Aja for Windows only!")
endif(NOT WINDOWS)
set(PLUGIN_NAME aja_64)
set(SOURCE_FILES
export.cpp
ntv2Discovery.cpp
ntv2Helper.cpp
ntv2plugin.cpp
ntv2Reader.cpp
ntv2Writer.cpp
)
set(HEADER_FILES
ntv2Discovery.hpp
ntv2Helper.hpp
ntv2plugin.hpp
ntv2Reader.hpp
ntv2Writer.hpp
)
vs_add_IO_library(${PLUGIN_NAME} SHARED ${SOURCE_FILES} ${HEADER_FILES} $<TARGET_OBJECTS:common>)
target_compile_definitions(${PLUGIN_NAME} PRIVATE "MSWindows")
target_compile_definitions(${PLUGIN_NAME} PRIVATE "AJA_WINDOWS")
target_compile_definitions(${PLUGIN_NAME} PRIVATE "AJA_NO_AUTOIMPORT")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /wd4005")
include_lib_vs_headers(${PLUGIN_NAME})
include_discovery_vs_headers(${PLUGIN_NAME})
target_include_directories(${PLUGIN_NAME} PRIVATE "${CMAKE_EXTERNAL_DEPS}/include/aja/includes")
target_include_directories(${PLUGIN_NAME} PRIVATE "${CMAKE_EXTERNAL_DEPS}/include/aja/ajaapi")
target_include_directories(${PLUGIN_NAME} PRIVATE "${CMAKE_EXTERNAL_DEPS}/include/aja/classes")
target_include_directories(${PLUGIN_NAME} PRIVATE "${CMAKE_EXTERNAL_DEPS}/include/aja/winclasses")
target_include_directories(${PLUGIN_NAME} PRIVATE ../common/include)
set_property(TARGET ${PLUGIN_NAME} PROPERTY CXX_STANDARD 14)
find_debug_and_optimized_library(AJA_STUFFS "aja/debug" "ajastuffdll_64d" "aja/release" "ajastuffdll_64")
find_debug_and_optimized_library(AJA_CLASSES "aja/debug" "classesDLL_64d" "aja/release" "classesDLL_64")
target_link_libraries(${PLUGIN_NAME} PRIVATE ${AJA_STUFFS} ${AJA_CLASSES} ${VS_DISCOVERY})
link_target_to_libvideostitch(${PLUGIN_NAME})
target_compile_definitions(${PLUGIN_NAME} PRIVATE NOMINMAX _USE_MATH_DEFINES)
# AJA documentation
`AJA` is an IO plugin for Vahana VR. It allows Vahana VR to capture audio/video
input and stream audio/video output with the AJA Corvid capture cards.
The plugin has been developed and tested with the following models:
* [AJA Corvid 88](https://www.aja.com/en/products/developer/corvid-88)
## Set-up
Drivers:
## Input configuration
The AJA plugin can be used by Vahana VR through a .vah project file. Please see
the `*.vah file format specification` for additional details.
Define an input for each input on the capture cards. The `reader_config` member specifies how to read
it.
### Example
"inputs" : [
{
"width" : 1920,
"height" : 1080,
...
"reader_config" : {
"type" : "aja",
"name" : "01",
"device": 0,
"channel": 1,
"fps" : 30,
"pixel_format" : "UYVY",
"audio" : true
}
...
},
{
"width" : 1920,
"height" : 1080,
...
"reader_config" : {
"type" : "aja",
"name" : "02",
"device": 0,
"channel" : 2,
"fps" : 30,
"pixel_format" : "UYVY",
"audio" : false
}
...
}]
### Parameters
<table>
<tr><th>Member</th><th>Type</th><th>Value</th><th colspan="2"></th></tr>
<tr><td><strong>type</strong></td><td>string</td><td>aja</td><td colspan="2"><strong>Required</strong>. Defines an AJA input.</td></tr>
<tr><td><strong>name</strong></td><td>string</td><td>-</td><td colspan="2"><strong>Required</strong>. The AJA input entry name.<br /> Note that the <code>width</code> and <code>height</code> fields must exactly match an existing display mode below.</td></tr>
<tr><td><strong>device</strong></td><td>int</td><td>-</td><td><strong>Required</strong>. The input card number (starting from 0).</td></tr>
<tr><td><strong>channel</strong></td><td>int</td><td>-</td><td><strong>Required</strong>. The input channel on the selected card.</td></tr>
<tr><td><strong>fps</strong></td><td>double</td><td>-</td><td><strong>Required</strong>. The input framerate.</td></tr>
<tr><td><strong>pixel_format</strong></td><td>string</td><td>-</td><td colspan="2"><strong>Required</strong>. The input pixel format. Supported values are <code>UYVY</code>, <code>YV12</code> and <code>BGRU</code>.</td></tr>
<tr><td><strong>audio</strong></td><td>bool</td><td>-</td><td colspan="2"><strong>Required</strong>. Whether this reader captures audio.</td></tr>
</table>
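As a quick sanity check, the required fields listed above can be validated before handing a `reader_config` to the plugin. This is a hypothetical helper for illustration, not part of the plugin API:

```python
# Minimal sketch (not part of the plugin API): validate the required
# "reader_config" fields described in the table above.
REQUIRED_FIELDS = {
    "type": str,
    "name": str,
    "device": int,
    "channel": int,
    "fps": (int, float),
    "pixel_format": str,
    "audio": bool,
}
SUPPORTED_PIXEL_FORMATS = {"UYVY", "YV12", "BGRU"}

def validate_reader_config(cfg):
    """Return a list of problems; an empty list means the config looks valid."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in cfg:
            problems.append("missing field: " + field)
        elif not isinstance(cfg[field], expected_type):
            problems.append("wrong type for field: " + field)
    if cfg.get("type") != "aja":
        problems.append("type must be 'aja'")
    if cfg.get("pixel_format") not in SUPPORTED_PIXEL_FORMATS:
        problems.append("unsupported pixel_format")
    return problems
```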
## Output configuration
An example of how to use device 1 on channel 2, with audio enabled, at 29.97 fps.
### Example
"outputs" : [
{
"type" : "aja",
"filename" : "12",
"device" : 1,
"channel" : 2,
"fps" : 29.97,
"pixel_format" : "UYVY",
"audio": true
}]
### Parameters
<table>
<tr><th>Member</th><th>Type</th><th>Value</th><th colspan="2"></th></tr>
<tr><td><strong>type</strong></td><td>string</td><td>aja</td><td colspan="2"><strong>Required</strong>. Defines an AJA output.</td></tr>
<tr><td><strong>filename</strong></td><td>string</td><td>-</td><td colspan="2"><strong>Required</strong>. AJA output identifier.<br /> Note that the <code>width</code> and <code>height</code> fields must exactly match an existing display mode below, and should be larger than or equal to the panorama's dimensions.</td></tr>
<tr><td><strong>device</strong></td><td>int</td><td>-</td><td><strong>Required</strong>. The output card number (starting from 0).</td></tr>
<tr><td><strong>channel</strong></td><td>int</td><td>-</td><td><strong>Required</strong>. The output channel on the selected card.</td></tr>
<tr><td><strong>fps</strong></td><td>double</td><td>-</td><td><strong>Required</strong>. The output framerate.</td></tr>
<tr><td><strong>pixel_format</strong></td><td>string</td><td>-</td><td colspan="2"><strong>Required</strong>. The output pixel format. Supported values are <code>UYVY</code>, <code>YV12</code> and <code>BGRU</code>.</td></tr>
<tr><td><strong>audio</strong></td><td>bool</td><td>-</td><td><strong>Required</strong>. Whether output audio is enabled.</td></tr>
<tr><td><strong>offset_x</strong></td><td>int</td><td>-</td><td><strong>Optional</strong>. The horizontal panorama offset within the display.</td></tr>
<tr><td><strong>offset_y</strong></td><td>int</td><td>-</td><td><strong>Optional</strong>. The vertical panorama offset within the display.</td></tr>
</table>
## Audio
Only one audio mode is supported for input and output:
48000 Hz, 8 channels, 32 bits
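For capacity planning, the raw audio data rate implied by this mode is easy to derive; a back-of-the-envelope sketch:

```python
# Raw audio data rate for the single supported mode:
# 48 kHz sample rate, 8 channels, 32-bit samples.
sample_rate_hz = 48000
channels = 8
bytes_per_sample = 32 // 8  # 32 bits = 4 bytes

bytes_per_second = sample_rate_hz * channels * bytes_per_sample
# 1536000 bytes/s, i.e. about 1.46 MiB/s of uncompressed audio
```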
## AJA supported display modes
The available display modes depend on the AJA card.
For the Corvid 88, more information is available at https://www.aja.com/en/products/developer/corvid-88#techspecs
<table>
<tr><th>Mode name</th><th>Height</th><th>Width</th><th>Mode</th><th>Framerate</th></tr>
<tr><td>SD</td><td>525</td><td>720</td><td>i</td><td>29.97</td></tr>
<tr><td>SD</td><td>625</td><td>720</td><td>i</td><td>25</td></tr>
<tr><td>HD</td><td>720</td><td>1280</td><td>p</td><td>50</td></tr>
<tr><td>HD</td><td>720</td><td>1280</td><td>p</td><td>59.94</td></tr>
<tr><td>HD</td><td>720</td><td>1280</td><td>p</td><td>60</td></tr>
<tr><td>HD</td><td>1080</td><td>1920</td><td>i</td><td>25</td></tr>
<tr><td>HD</td><td>1080</td><td>1920</td><td>i</td><td>29.97</td></tr>
<tr><td>HD</td><td>1080</td><td>1920</td><td>i</td><td>30</td></tr>
<tr><td>HD</td><td>1080</td><td>1920</td><td>PsF</td><td>23.98</td></tr>
<tr><td>HD</td><td>1080</td><td>1920</td><td>PsF</td><td>24</td></tr>
<tr><td>HD</td><td>1080</td><td>1920</td><td>PsF</td><td>29.97</td></tr>
<tr><td>HD</td><td>1080</td><td>1920</td><td>PsF</td><td>30</td></tr>
<tr><td>HD</td><td>1080</td><td>1920</td><td>p</td><td>23.98</td></tr>
<tr><td>HD</td><td>1080</td><td>1920</td><td>p</td><td>24</td></tr>
<tr><td>HD</td><td>1080</td><td>1920</td><td>p</td><td>25</td></tr>
<tr><td>HD</td><td>1080</td><td>1920</td><td>p</td><td>29.97</td></tr>
<tr><td>HD</td><td>1080</td><td>1920</td><td>p</td><td>30</td></tr>
<tr><td>HD</td><td>1080</td><td>1920</td><td>p</td><td>50A/B</td></tr>
<tr><td>HD</td><td>1080</td><td>1920</td><td>p</td><td>59.94A/B</td></tr>
<tr><td>HD</td><td>1080</td><td>1920</td><td>p</td><td>60A/B</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>p</td><td>23.98</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>p</td><td>24</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>p</td><td>25</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>p</td><td>29.97</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>p</td><td>30</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>p</td><td>48A/B</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>p</td><td>50A/B</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>p</td><td>59.94A/B</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>p</td><td>60A/B</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>PsF</td><td>23.98</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>PsF</td><td>24</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>PsF</td><td>25</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>PsF</td><td>29.97</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>PsF</td><td>30</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>PsF</td><td>48A/B</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>PsF</td><td>50A/B</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>PsF</td><td>59.94A/B</td></tr>
<tr><td>2K</td><td>1080</td><td>2048</td><td>PsF</td><td>60A/B</td></tr>
<tr><td>4K</td><td>2160</td><td>3840</td><td>p</td><td>23.98</td></tr>
<tr><td>4K</td><td>2160</td><td>3840</td><td>p</td><td>24</td></tr>
<tr><td>4K</td><td>2160</td><td>3840</td><td>p</td><td>25</td></tr>
<tr><td>4K</td><td>2160</td><td>3840</td><td>p</td><td>29.97</td></tr>
<tr><td>4K</td><td>2160</td><td>3840</td><td>p</td><td>30</td></tr>
<tr><td>4K</td><td>2160</td><td>3840</td><td>p</td><td>48A/B</td></tr>
<tr><td>4K</td><td>2160</td><td>3840</td><td>p</td><td>50A/B</td></tr>
<tr><td>4K</td><td>2160</td><td>3840</td><td>p</td><td>59.94A/B</td></tr>
<tr><td>4K</td><td>2160</td><td>3840</td><td>p</td><td>60A/B</td></tr>
<tr><td>4K</td><td>2160</td><td>4096</td><td>p</td><td>23.98</td></tr>
<tr><td>4K</td><td>2160</td><td>4096</td><td>p</td><td>24</td></tr>
<tr><td>4K</td><td>2160</td><td>4096</td><td>p</td><td>25</td></tr>
<tr><td>4K</td><td>2160</td><td>4096</td><td>p</td><td>29.97</td></tr>
<tr><td>4K</td><td>2160</td><td>4096</td><td>p</td><td>30</td></tr>
<tr><td>4K</td><td>2160</td><td>4096</td><td>p</td><td>48A/B</td></tr>
<tr><td>4K</td><td>2160</td><td>4096</td><td>p</td><td>50A/B</td></tr>
<tr><td>4K</td><td>2160</td><td>4096</td><td>p</td><td>59.94A/B</td></tr>
<tr><td>4K</td><td>2160</td><td>4096</td><td>p</td><td>60A/B</td></tr>
</table>
// Copyright (c) 2012-2017 VideoStitch SAS
// Copyright (c) 2018 stitchEm
#include "export.hpp"
#include "libgpudiscovery/delayLoad.hpp"
#include "libvideostitch/plugin.hpp"
#include "libvideostitch/ptv.hpp"
#include "libvideostitch/logging.hpp"
#include "ntv2plugin.hpp"
#include "ntv2Discovery.hpp"
#include "ntv2Reader.hpp"
#include "ntv2Writer.hpp"
#ifdef DELAY_LOAD_ENABLED
SET_DELAY_LOAD_HOOK
#endif // DELAY_LOAD_ENABLED
extern "C" VS_PLUGINS_EXPORT VideoStitch::Potential<VideoStitch::Input::Reader>* __cdecl createReaderFn(
const VideoStitch::Ptv::Value* config, VideoStitch::Plugin::VSReaderPlugin::Config runtime) {
VideoStitch::Input::NTV2Reader* ntv2Reader =
VideoStitch::Input::NTV2Reader::create(runtime.id, config, runtime.width, runtime.height);
if (ntv2Reader) {
return new VideoStitch::Potential<VideoStitch::Input::Reader>(ntv2Reader);
}
return new VideoStitch::Potential<VideoStitch::Input::Reader>(
VideoStitch::Origin::Input, VideoStitch::ErrType::InvalidConfiguration, "Could not create Aja reader");
}
extern "C" VS_PLUGINS_EXPORT bool __cdecl handleReaderFn(const VideoStitch::Ptv::Value* config) {
return config && config->has("type") && config->has("type")->asString() == "aja";
}
/** \name Services for writer plugin. */
//\{
extern "C" VS_PLUGINS_EXPORT VideoStitch::Potential<VideoStitch::Output::Output>* createWriterFn(
VideoStitch::Ptv::Value const* config, VideoStitch::Plugin::VSWriterPlugin::Config run_time) {
VideoStitch::Output::Output* lReturn = nullptr;
VideoStitch::Output::BaseConfig baseConfig;
if (baseConfig.parse(*config).ok()) {
lReturn = VideoStitch::Output::NTV2Writer::create(*config, run_time.name, baseConfig.baseName, run_time.width,
run_time.height, run_time.framerate);
}
if (lReturn) {
return new VideoStitch::Potential<VideoStitch::Output::Output>(lReturn);
}
return new VideoStitch::Potential<VideoStitch::Output::Output>(
VideoStitch::Origin::Output, VideoStitch::ErrType::InvalidConfiguration, "Could not create Aja writer");
}
extern "C" VS_PLUGINS_EXPORT bool handleWriterFn(VideoStitch::Ptv::Value const* config) {
bool lReturn(false);
VideoStitch::Output::BaseConfig baseConfig;
if (baseConfig.parse(*config).ok()) {
lReturn = (!strcmp(baseConfig.strFmt, "aja"));
} else {
// TODOLATERSTATUS propagate config problem
VideoStitch::Logger::get(VideoStitch::Logger::Verbose) << "Invalid aja config encountered" << std::endl;
}
return lReturn;
}
extern "C" VS_PLUGINS_EXPORT VideoStitch::Plugin::VSDiscoveryPlugin* discoverFn() {
return VideoStitch::Plugin::Ntv2Discovery::create();
}
// Copyright (c) 2012-2017 VideoStitch SAS
// Copyright (c) 2018 stitchEm
#include "ntv2Discovery.hpp"
#include "libvideostitch/logging.hpp"
#include <thread>
#include <future>
#include <chrono>
#include <algorithm>
#include <locale>
#include <codecvt>
#include <ntv2utils.h>
using namespace VideoStitch;
using namespace Plugin;
Ntv2Discovery* Ntv2Discovery::create() {
std::vector<std::string> cards;
std::vector<std::shared_ptr<Device>> devices;
CNTV2DeviceScanner ajaDeviceScanner;
ajaDeviceScanner.ScanHardware();
size_t nbCard = ajaDeviceScanner.GetNumDevices();
if (nbCard == 0) return nullptr;
for (uint32_t iDevice = 0; iDevice < nbCard; ++iDevice) {
Device device;
device.boardInfo = ajaDeviceScanner.GetDeviceInfoList()[iDevice];
CNTV2Card card;
CNTV2DeviceScanner::GetDeviceAtIndex(iDevice, card);
cards.push_back(device.boardInfo.deviceIdentifier);
for (int8_t i = 0; i < device.boardInfo.numVidInputs; ++i) {
std::shared_ptr<InputDevice> inputDevice = std::make_shared<InputDevice>();
// Aja inputs are labeled from 1 to numVidInputs
inputDevice->pluginDevice.displayName = device.boardInfo.deviceIdentifier + " Input " + std::to_string(i + 1);
inputDevice->pluginDevice.name = std::to_string(device.boardInfo.deviceIndex) + std::to_string(i);
inputDevice->pluginDevice.type = Plugin::DiscoveryDevice::CAPTURE;
inputDevice->pluginDevice.mediaType = Plugin::DiscoveryDevice::MediaType::AUDIO_AND_VIDEO;
inputDevice->boardIdx = device.boardInfo.deviceIndex;
inputDevice->channelIdx = i;
inputDevice->boardInfo = device.boardInfo;
devices.push_back(inputDevice);
}
for (int8_t i = 0; i < device.boardInfo.numVidOutputs; ++i) {
std::shared_ptr<OutputDevice> outputDevice = std::make_shared<OutputDevice>();
// Aja outputs are labeled from 1 to numVidOutputs
outputDevice->pluginDevice.displayName = device.boardInfo.deviceIdentifier + " Output " + std::to_string(i + 1);
outputDevice->pluginDevice.name = std::to_string(device.boardInfo.deviceIndex) + std::to_string(i);
outputDevice->pluginDevice.type = Plugin::DiscoveryDevice::PLAYBACK;
outputDevice->pluginDevice.mediaType = Plugin::DiscoveryDevice::MediaType::AUDIO_AND_VIDEO;
outputDevice->boardIdx = device.boardInfo.deviceIndex;
outputDevice->channelIdx = i;
outputDevice->boardInfo = device.boardInfo;
devices.push_back(outputDevice);
}
}
return new Ntv2Discovery(cards, devices);
}
Ntv2Discovery::Ntv2Discovery(const std::vector<std::string>& cards, const std::vector<std::shared_ptr<Device>>& devices)
: m_cards(cards), m_devices(devices) {}
Ntv2Discovery::~Ntv2Discovery() {}
std::string Ntv2Discovery::name() const { return "aja"; }
std::string Ntv2Discovery::readableName() const { return "AJA"; }
std::vector<Plugin::DiscoveryDevice> Ntv2Discovery::inputDevices() {
std::vector<Plugin::DiscoveryDevice> pluginDevices;
for (auto it = m_devices.begin(); it != m_devices.end(); ++it) {
if ((*it)->pluginDevice.type == Plugin::DiscoveryDevice::CAPTURE) {
pluginDevices.push_back((*it)->pluginDevice);
}
}
return pluginDevices;
}
std::vector<Plugin::DiscoveryDevice> Ntv2Discovery::outputDevices() {
std::vector<Plugin::DiscoveryDevice> pluginDevices;
for (auto it = m_devices.begin(); it != m_devices.end(); ++it) {
if ((*it)->pluginDevice.type == Plugin::DiscoveryDevice::PLAYBACK) {
pluginDevices.push_back((*it)->pluginDevice);
}
}
return pluginDevices;
}
std::vector<std::string> Ntv2Discovery::cards() const { return m_cards; }
void Ntv2Discovery::registerAutoDetectionCallback(AutoDetection& autoDetection) {
return;  // Incompatible with the AJA Reader in its current form; would need a wrapper around the AJA SDK
}
DisplayMode Ntv2Discovery::currentDisplayMode(const DiscoveryDevice& device) {
const auto it = std::find_if(
m_devices.begin(), m_devices.end(),
[&device](const std::shared_ptr<Device>& m_device) -> bool { return device == m_device->pluginDevice; });
if (it != m_devices.end()) {
CNTV2Card card;
if (!CNTV2DeviceScanner::GetDeviceAtIndex((*it)->boardIdx, card)) {
return DisplayMode(0, 0, false, {1, 1});
}
const NTV2InputSource inputSource = GetNTV2InputSourceForIndex((*it)->channelIdx);
if (inputSource == NTV2_INPUTSOURCE_INVALID) {
return DisplayMode(0, 0, false, {1, 1});
}
const NTV2VideoFormat videoFormat = card.GetInputVideoFormat(inputSource);
return aja2vsDisplayFormat(videoFormat);
}
return DisplayMode(0, 0, false, {1, 1});
}
std::vector<DisplayMode> Ntv2Discovery::supportedDisplayModes(const Plugin::DiscoveryDevice& device) {
auto it = std::find_if(
m_devices.begin(), m_devices.end(),
[&device](const std::shared_ptr<Device>& m_device) -> bool { return device == m_device->pluginDevice; });
if (it != m_devices.end()) {
std::vector<DisplayMode> supportedDisplayModes;
CNTV2Card card;
CNTV2DeviceScanner::GetDeviceAtIndex((*it)->boardIdx, card);
NTV2VideoFormatSet outFormats;
card.GetSupportedVideoFormats(outFormats);
for (auto it = outFormats.begin(); it != outFormats.end(); ++it) {
const DisplayMode displayMode = aja2vsDisplayFormat((*it));
if (displayMode.width != 0 && displayMode.height != 0) {
supportedDisplayModes.push_back(displayMode);
}
}
std::sort(supportedDisplayModes.begin(), supportedDisplayModes.end());
return supportedDisplayModes;
} else {
return std::vector<DisplayMode>();
}
}
std::vector<PixelFormat> Ntv2Discovery::supportedPixelFormat(const Plugin::DiscoveryDevice& device) {
auto it = std::find_if(
m_devices.begin(), m_devices.end(),
[&device](const std::shared_ptr<Device>& m_device) -> bool { return device == m_device->pluginDevice; });
if (it != m_devices.end()) {
std::vector<PixelFormat> pixelFormats;
PixelFormat vsPF;
// iterate over the enum's contiguous values; see ntv2enums.h
for (uint32_t i = NTV2_FBF_10BIT_YCBCR; i < NTV2_FBF_NUMFRAMEBUFFERFORMATS; ++i) {
if (NTV2DeviceCanDoFrameBufferFormat((*it)->boardInfo.deviceID, (NTV2FrameBufferFormat)i)) {
// convertPixelFormat((NTV2FrameBufferFormat)i, vsPF);
vsPF = aja2vsPixelFormat((NTV2FrameBufferFormat)i);
if (vsPF != Unknown) pixelFormats.push_back(vsPF);
}
}
return pixelFormats;
} else {
return std::vector<PixelFormat>();
}
}
std::vector<int> Ntv2Discovery::supportedNbChannels(const Plugin::DiscoveryDevice& /*device*/) {
std::vector<int> channels;
CNTV2DeviceScanner ajaDeviceScanner;
ajaDeviceScanner.ScanHardware();
for (uint32_t iDevice = 0; iDevice < ajaDeviceScanner.GetNumDevices(); ++iDevice) {
channels.push_back((int)ajaDeviceScanner.GetDeviceInfoList()[iDevice].numAudioStreams);
}
return channels;
}
std::vector<Audio::SamplingRate> Ntv2Discovery::supportedSamplingRates(const Plugin::DiscoveryDevice& /*device*/) {
std::vector<Audio::SamplingRate> rates;
CNTV2DeviceScanner ajaDeviceScanner;
ajaDeviceScanner.ScanHardware();
for (uint32_t iDevice = 0; iDevice < ajaDeviceScanner.GetNumDevices(); ++iDevice) {
NTV2AudioSampleRateList audioSampleRateList = ajaDeviceScanner.GetDeviceInfoList()[iDevice].audioSampleRateList;
for (auto it = audioSampleRateList.begin(); it != audioSampleRateList.end(); it++) {
Audio::SamplingRate sRate = convertSamplerate(*it);
if (sRate != Audio::SamplingRate::SR_NONE) rates.push_back(sRate);
}
}
return rates;
}
std::vector<Audio::SamplingDepth> Ntv2Discovery::supportedSampleFormats(const Plugin::DiscoveryDevice& /*device*/) {
std::vector<Audio::SamplingDepth> formats;
CNTV2DeviceScanner ajaDeviceScanner;
ajaDeviceScanner.ScanHardware();
for (uint32_t iDevice = 0; iDevice < ajaDeviceScanner.GetNumDevices(); ++iDevice) {
NTV2AudioBitsPerSampleList audioBitsPerSampleList =
ajaDeviceScanner.GetDeviceInfoList()[iDevice].audioBitsPerSampleList;
for (auto it = audioBitsPerSampleList.begin(); it != audioBitsPerSampleList.end(); it++) {
Audio::SamplingDepth sDepth = convertFormats(*it);
if (sDepth != Audio::SamplingDepth::SD_NONE) formats.push_back(sDepth);
}
}
return formats;
}
Audio::SamplingRate Ntv2Discovery::convertSamplerate(AudioSampleRateEnum ntv2SampleRate) {
switch (ntv2SampleRate) {
case k44p1KHzSampleRate:
return Audio::SamplingRate::SR_44100;
case k48KHzSampleRate:
return Audio::SamplingRate::SR_48000;
case k96KHzSampleRate:
return Audio::SamplingRate::SR_96000;
default:
return Audio::SamplingRate::SR_NONE;
}
}
Audio::SamplingDepth Ntv2Discovery::convertFormats(AudioBitsPerSampleEnum ntv2Format) {
switch (ntv2Format) {
case k16bitsPerSample:
return Audio::SamplingDepth::INT16;
case k24bitsPerSample:
return Audio::SamplingDepth::INT32;  // 24-bit audio PCM is stored in 32 bits
case k32bitsPerSample:
return Audio::SamplingDepth::INT32;
return Audio::SamplingDepth::INT32;
default:
return Audio::SamplingDepth::SD_NONE;
}
}
// Copyright (c) 2012-2017 VideoStitch SAS
// Copyright (c) 2018 stitchEm
#pragma once
#include "libvideostitch/plugin.hpp"
#include "ntv2Helper.hpp"
#include <vector>
#include <string>
#include <memory>
#include <windows.h>
#include "ajatypes.h"
#include "ajastuff/common/types.h"
#include "ntv2card.h"
#include "ntv2devicescanner.h"
#include "ntv2publicinterface.h"
namespace VideoStitch {
namespace Plugin {
class Ntv2Discovery : public VSDiscoveryPlugin {
struct Device {
Device() : boardIdx(0), channelIdx(0), autodetection(nullptr) {}
Plugin::DiscoveryDevice pluginDevice;
NTV2DeviceInfo boardInfo;
uint32_t boardIdx;
uint32_t channelIdx;
AutoDetection* autodetection;
};
struct InputDevice : public Device {
InputDevice() : Device() {}
};
struct OutputDevice : public Device {
OutputDevice() : Device() {}
};
public:
static Ntv2Discovery* create();
virtual ~Ntv2Discovery();
virtual std::string name() const override;
virtual std::string readableName() const override;
virtual std::vector<Plugin::DiscoveryDevice> inputDevices() override;
virtual std::vector<Plugin::DiscoveryDevice> outputDevices() override;
virtual std::vector<std::string> cards() const override;
virtual void registerAutoDetectionCallback(AutoDetection&) override;
virtual std::vector<DisplayMode> supportedDisplayModes(const Plugin::DiscoveryDevice&) override;
virtual std::vector<PixelFormat> supportedPixelFormat(const Plugin::DiscoveryDevice&) override;
virtual std::vector<int> supportedNbChannels(const Plugin::DiscoveryDevice& device) override;
virtual std::vector<Audio::SamplingRate> supportedSamplingRates(const Plugin::DiscoveryDevice& device) override;
virtual std::vector<Audio::SamplingDepth> supportedSampleFormats(const Plugin::DiscoveryDevice& device) override;
bool supportVideoMode(const Plugin::DiscoveryDevice&, const DisplayMode&, const PixelFormat&);
private:
Ntv2Discovery(const std::vector<std::string>& cards, const std::vector<std::shared_ptr<Device>>& devices);
DisplayMode currentDisplayMode(const Plugin::DiscoveryDevice& device);
static Audio::SamplingRate convertSamplerate(AudioSampleRateEnum ntv2SampleRate);
static Audio::SamplingDepth convertFormats(AudioBitsPerSampleEnum ntv2Format);
std::vector<std::string> m_cards;
std::vector<std::shared_ptr<Device>> m_devices;
};
} // namespace Plugin
} // namespace VideoStitch
// Copyright (c) 2012-2017 VideoStitch SAS
// Copyright (c) 2018 stitchEm
#pragma once
#include "libvideostitch/frame.hpp"
#include "libvideostitch/plugin.hpp"
#include <ajatypes.h>
#include <ajastuff/common/types.h>
#include <ntv2publicinterface.h>
#include <ntv2utils.h>
#include <ntv2card.h>
#include <ntv2rp188.h>
using namespace VideoStitch;
using namespace Plugin;
FrameRate aja2vsFrameRate(const NTV2FrameRate frameRate);
NTV2FrameBufferFormat vs2ajaPixelFormat(const PixelFormat pixelFmt);
NTV2VideoFormat vs2ajaDisplayFormat(const DisplayMode displayFmt);
PixelFormat aja2vsPixelFormat(const NTV2FrameBufferFormat pixelFmt);
DisplayMode aja2vsDisplayFormat(const NTV2VideoFormat displayFmt);
TimecodeFormat NTV2FrameRate2TimecodeFormat(const NTV2FrameRate inFrameRate);
ULWord GetRP188RegisterForInput(const NTV2InputSource inInputSource);
// Copyright (c) 2012-2017 VideoStitch SAS
// Copyright (c) 2018 stitchEm
#pragma once
#include "ajastuff/common/circularbuffer.h"
#include "ajastuff/system/thread.h"
#include "libvideostitch/inputFactory.hpp"
#include "libvideostitch/ptv.hpp"
#include "ntv2plugin.hpp"
#include <atomic>
namespace VideoStitch {
namespace Input {
class NTV2Reader : public VideoReader, public AudioReader {
public:
static NTV2Reader* create(readerid_t id, const Ptv::Value* config, const int64_t width, const int64_t height);
virtual ~NTV2Reader();
ReadStatus readSamples(size_t nbSamples, Audio::Samples& audioSamples) override;
ReadStatus readFrame(mtime_t&, unsigned char* video) override;
Status seekFrame(frameid_t) override;
Status seekFrame(mtime_t) override;
size_t available() override;
bool eos() override;
private:
NTV2Reader(readerid_t id, const int64_t width, const int64_t height, const UWord deviceIndex, const bool withAudio,
const NTV2Channel channel, FrameRate fps, bool interlaced);
// -- init
AJAStatus init();
void quit();
AJAStatus setupVideo(NTV2Channel);
AJAStatus setupAudio();
void setupHostBuffers();
void routeInputSignal(NTV2Channel);
AJAStatus run();
// -- capture
AJAThread* producerThread;
void startProducerThread();
void captureFrames();
static void producerThreadStatic(AJAThread*, void*);
bool InputSignalHasTimecode() const;
const uint32_t deviceIndex;
const bool withAudio;
const NTV2Channel inputChannel;
NTV2InputSource inputSource;
NTV2VideoFormat videoFormat;
NTV2FrameRate frameRate;
NTV2AudioSystem audioSystem;
CNTV2SignalRouter router;
int32_t startFrameId;
NTV2TCIndex timeCodeSource;
NTV2Device* device = nullptr;
DisplayMode displayMode;
FrameRate* frameRateVS = nullptr;
std::atomic<bool> noSignal{true};
bool interlaced;
bool AJAStop;
AUTOCIRCULATE_TRANSFER mInputTransfer; /* class used for AutoCirculate; do not memset it to zero! */
std::atomic<bool> globalQuit; /// Set "true" to gracefully stop
uint32_t videoBufferSize; /// in bytes
uint32_t audioBufferSize; /// in bytes
mtime_t videoTS;
AVDataBuffer aVHostBuffer[CIRCULAR_BUFFER_SIZE];
AJACircularBuffer<AVDataBuffer*> aVCircularBuffer;
typedef uint32_t ajasample_t;
std::vector<ajasample_t> audioBuff;
mtime_t audioTS;
uint64_t nbSamplesRead;
std::mutex audioBuffMutex;
std::mutex quitMutex;
};
} // namespace Input
} // namespace VideoStitch
// Copyright (c) 2012-2017 VideoStitch SAS
// Copyright (c) 2018 stitchEm
#pragma once
#include "ajastuff/common/circularbuffer.h"
#include "ajastuff/common/timecodeburn.h"
#include "libvideostitch/stitchOutput.hpp"
#include "libvideostitch/ptv.hpp"
#include "libvideostitch/circularBuffer.hpp"
#include "ntv2rp188.h"
#include "ntv2plugin.hpp"
#include <vector>
#include <mutex>
#include <atomic>
class AJAThread;
namespace VideoStitch {
namespace Output {
class NTV2Writer : public VideoWriter, public AudioWriter {
public:
static Output* create(const Ptv::Value& config, const std::string& name, const char* baseName, unsigned width,
unsigned height, FrameRate framerate);
~NTV2Writer();
virtual void pushVideo(const Frame& videoFrame) override;
virtual void pushAudio(Audio::Samples& audioSamples) override;
private:
NTV2Writer(const std::string& name, const UWord deviceIndex, const bool withAudio, const NTV2Channel channel,
const NTV2VideoFormat format, unsigned width, unsigned height, unsigned offset_x, unsigned offset_y,
FrameRate fps);
// -- init
AJAStatus _init();
AJAStatus run();
void quit();
AJAStatus setupVideo(NTV2Channel);
AJAStatus setupAudio();
void setupHostBuffers();
void setupOutputAutoCirculate();
static bool checkChannelConf(unsigned width, unsigned height, int chan);
void routeOutputSignal(NTV2Channel);
AJA_PixelFormat getAJAPixelFormat(NTV2FrameBufferFormat format);
bool outputDestHasRP188BypassEnabled(void);
void disableRP188Bypass(void);
/**
@brief Returns the RP188 DBB register number to use for the given NTV2OutputDestination.
@param[in] inOutputSource Specifies the NTV2OutputDestination of interest.
@return The number of the RP188 DBB register to use for the given output destination.
**/
static ULWord getRP188RegisterForOutput(const NTV2OutputDestination inOutputSource);
// -- player
void startConsumerThread();
void startProducerThread();
void playFrames();
void produceFrames();
static void consumerThreadStatic(AJAThread*, void*);
static void producerThreadStatic(AJAThread*, void*);
// Helper functions
/**
* @brief Initializes a table of tones, one per channel, for 16 channels; each channel
* has a specific frequency. Very useful for debugging AJA output.
* Supports 16 interleaved channels, int32_t sample format at 48 kHz.
* Each tone is a multiple of 480 Hz.
**/
void initSinTableFor16Channels();
/**
* @brief Fills the given buffer with one tone per channel.
* Supports 16 interleaved channels, int32_t sample format at 48 kHz.
* Each tone is a multiple of 480 Hz.
**/
uint32_t addAudioToneVS(int32_t* audioBuffer);
AJAThread* consumerThread;
AJAThread* producerThread;
const uint32_t deviceIndex;
uint8_t outputNb;
const bool withAudio;
const NTV2Channel outputChannel;
NTV2OutputDestination outputDestination;
NTV2VideoFormat videoFormat;
NTV2AudioSystem audioSystem; /// The audio system in use
uint32_t nbAJAChannels;
CNTV2SignalRouter router;
#ifdef DEPRECATED
AUTOCIRCULATE_TRANSFER_STRUCT outputTransferStruct; /// My A/C output transfer info
AUTOCIRCULATE_TRANSFER_STATUS_STRUCT outputTransferStatusStruct;
#endif
std::atomic<bool> globalQuit; /// Set "true" to gracefully stop
bool AJAStop;
uint32_t videoBufferSize; /// in bytes
uint32_t audioBufferSize; /// in bytes
uint32_t nbSamplesPerFrame;
AVDataBuffer aVHostBuffer[CIRCULAR_BUFFER_SIZE];
AJACircularBuffer<AVDataBuffer*> aVCircularBuffer;
CircularBuffer<uint8_t> videoBuffer;
CircularBuffer<int32_t> audioBuffer;
bool doLevelConversion; /// Demonstrates a level A to level B conversion
bool doMultiChannel; /// Demonstrates how to configure the board for multi-format
std::mutex frameMutex;
unsigned offset_x;
unsigned offset_y;
int32_t preRollFrames;
// Debug variables
uint32_t producedFrames;
uint32_t nbSamplesInWavForm;
std::vector<int32_t> sinTable16Channels;
ULWord currentSample;
};
} // namespace Output
} // namespace VideoStitch
// Copyright (c) 2012-2017 VideoStitch SAS
// Copyright (c) 2018 stitchEm
//
// NTV2 framework includes ksmedia.hpp, which defines the speaker map with the same names...
#include "libvideostitch/audio.hpp"
#include "ntv2plugin.hpp"
#include "ntv2devicescanner.h"
#include "ajastuff/system/process.h"
namespace VideoStitch {
NTV2Device::NTV2Device() : deviceID(DEVICE_ID_NOTFOUND), initialized(false) {}
NTV2Device::~NTV2Device() {
if (initialized) {
device.ReleaseStreamForApplication(appSignature, static_cast<uint32_t>(AJAProcess::GetPid()));
device.SetEveryFrameServices(savedTaskMode);
}
}
NTV2Device* NTV2Device::getDevice(uint32_t deviceIndex) {
std::unique_lock<std::mutex> lk(registryMutex);
if (registry.find(deviceIndex) == registry.end()) {
NTV2Device* device = new NTV2Device();
if (AJA_FAILURE(device->init(deviceIndex))) {
delete device;
return nullptr;
}
registry[deviceIndex] = device;
}
return registry[deviceIndex];
}
AJAStatus NTV2Device::init(uint32_t deviceIndex) {
// Open the device
if (!CNTV2DeviceScanner::GetDeviceAtIndex(deviceIndex, device)) {
return AJA_STATUS_OPEN;
}
if (!device.AcquireStreamForApplication(appSignature, static_cast<uint32_t>(AJAProcess::GetPid()))) {
return AJA_STATUS_BUSY; // Another app is using the device
}
device.GetEveryFrameServices(&savedTaskMode);
device.SetEveryFrameServices(NTV2_OEM_TASKS);
deviceID = device.GetDeviceID();
return AJA_STATUS_SUCCESS;
}
} // namespace VideoStitch
// Copyright (c) 2012-2017 VideoStitch SAS
// Copyright (c) 2018 stitchEm
#pragma once
#include "ajatypes.h"
#include "ajastuff/common/types.h"
#include "ntv2card.h"
#include "ntv2publicinterface.h"
#include "ntv2rp188.h"
#include "libvideostitch/frame.hpp"
#include "ntv2Helper.hpp"
#include <map>
#include <mutex>
static const ULWord appSignature AJA_FOURCC('V', 'I', 'S', 'T');
namespace VideoStitch {
/**
This structure encapsulates the video and audio buffers.
The producer/consumer threads use a fixed number (CIRCULAR_BUFFER_SIZE) of these buffers.
The AJACircularBuffer template class greatly simplifies implementing this approach to efficiently
process frames.
**/
typedef struct {
uint32_t* videoBuffer; /// Pointer to host video buffer
uint32_t* videoBuffer2; /// Pointer to an additional host video buffer, usually field 2
uint32_t videoBufferSize; /// Size of host video buffer, in bytes
uint32_t inNumSegments; /// 1 for host video buffer transfer, number of lines for specialized data transfers
uint32_t inDeviceBytesPerLine; /// device pitch for specialized data transfers
uint32_t* audioBuffer; /// Pointer to host audio buffer
uint32_t audioBufferSize; /// Size of host audio buffer, in bytes
CRP188 rp188; /// Time and control code
uint32_t* ancBuffer;
uint32_t ancBufferSize;
uint32_t currentFrame; /// Frame Number
uint64_t audioTimeStamp; /// Audio TimeStamp
} AVDataBuffer;
const unsigned int CIRCULAR_BUFFER_SIZE(10); /// Specifies how many AVDataBuffers constitute the circular buffer
class NTV2Device {
public:
static NTV2Device* getDevice(uint32_t device);
virtual ~NTV2Device();
CNTV2Card device;
NTV2DeviceID deviceID;
private:
NTV2Device();
AJAStatus init(uint32_t deviceIndex);
NTV2EveryFrameTaskMode savedTaskMode; /// Used to restore prior every-frame task mode
bool initialized;
};
static std::mutex registryMutex;
static std::map<int, NTV2Device*> registry;
} // namespace VideoStitch
if(DISABLE_AV)
return()
endif(DISABLE_AV)
set(SOURCE_FILES
src/avWriter.cpp
src/baseAllocator.cpp
src/d3dAllocator.cpp
src/d3dDevice.cpp
src/export.cpp
src/libavReader.cpp
src/netStreamReader.cpp
src/timeoutUtil.cpp
src/videoReader.cpp)
set(HEADER_FILES
include/avWriter.hpp
include/libavReader.hpp
include/netStreamReader.hpp
include/videoReader.hpp)
function(setup_av_plugin PLUGIN_NAME BACKEND_NAME USE_CUDA)
# Set GPU core plugin output directories
if(WINDOWS)
set(VS_GPU_PLUGIN_DIR_NAME core_plugins_${BACKEND_NAME})
# Set plugin output dir for the generic single-config case (e.g. make, ninja)
set(VS_GPU_PLUGIN_DIR ${VS_OUT_DIR}/${CMAKE_BUILD_TYPE_LOW}/${VS_GPU_PLUGIN_DIR_NAME})
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${VS_GPU_PLUGIN_DIR})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${VS_GPU_PLUGIN_DIR})
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${VS_GPU_PLUGIN_DIR})
# Set plugin output dir for multi-config builds (e.g. MSVC, Xcode)
foreach(OUTPUTCONFIG ${CMAKE_CONFIGURATION_TYPES})
string(TOUPPER ${OUTPUTCONFIG} OUTPUTCONFIG_UP)
string(TOLOWER ${OUTPUTCONFIG} OUTPUTCONFIG_LOW)
set(VS_GPU_PLUGIN_DIR_${OUTPUTCONFIG_UP} ${VS_OUT_DIR}/${OUTPUTCONFIG_LOW}/${VS_GPU_PLUGIN_DIR_NAME})
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY_${OUTPUTCONFIG_UP} ${VS_GPU_PLUGIN_DIR_${OUTPUTCONFIG_UP}})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_${OUTPUTCONFIG_UP} ${VS_GPU_PLUGIN_DIR_${OUTPUTCONFIG_UP}})
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY_${OUTPUTCONFIG_UP} ${VS_GPU_PLUGIN_DIR_${OUTPUTCONFIG_UP}})
endforeach()
endif(WINDOWS)
vs_add_IO_library(${PLUGIN_NAME} SHARED ${SOURCE_FILES} ${HEADER_FILES} $<TARGET_OBJECTS:common> $<TARGET_OBJECTS:format_cuda_${USE_CUDA}>)
include_lib_vs_headers(${PLUGIN_NAME})
include_discovery_vs_headers(${PLUGIN_NAME})
target_include_directories(${PLUGIN_NAME} PRIVATE include)
target_include_directories(${PLUGIN_NAME} PRIVATE ../common/include)
target_include_directories(${PLUGIN_NAME} PRIVATE ../common/format/include)
target_include_directories(${PLUGIN_NAME} PRIVATE ${CMAKE_EXTERNAL_DEPS}/include/Intel_Media_SDK)
if(USE_CUDA)
target_include_directories(${PLUGIN_NAME} PRIVATE ${CUDA_TOOLKIT_TARGET_DIR}/include)
target_compile_definitions(${PLUGIN_NAME} PRIVATE SUP_NVENC SUP_NVDEC)
find_package(CUDA REQUIRED)
target_link_libraries(${PLUGIN_NAME} PRIVATE ${CUDART})
endif()
set_property(TARGET ${PLUGIN_NAME} PROPERTY CXX_STANDARD 14)
set(FFMPEG_INCLUDE_PATH ${CMAKE_EXTERNAL_DEPS}/include/ffmpeg)
if(APPLE_MACPORTS)
set(FFMPEG_INCLUDE_PATH /opt/local/include)
endif()
target_include_directories(${PLUGIN_NAME} SYSTEM PRIVATE ${FFMPEG_INCLUDE_PATH})
if(LINUX OR APPLE OR ANDROID)
# VSA-5342: we're using functionality that has been deprecated in ffmpeg 3
target_compile_options(${PLUGIN_NAME} PRIVATE -Wno-deprecated-declarations)
endif()
target_link_libraries(${PLUGIN_NAME} PRIVATE ${FFMPEG_libraries_cuda_${USE_CUDA}})
if(WINDOWS)
string(TOUPPER ${BACKEND_NAME} BACKEND_NAME_UP)
target_link_libraries(${PLUGIN_NAME} PRIVATE ${VS_LIB_${BACKEND_NAME_UP}})
target_link_libraries(${PLUGIN_NAME} PRIVATE ${libmfxhw64} ${DirectX_LIB})
set_property(TARGET ${PLUGIN_NAME} APPEND_STRING PROPERTY LINK_FLAGS "/NODEFAULTLIB:libcmt /NODEFAULTLIB:libcmtd")
elseif(APPLE)
target_link_libraries(${PLUGIN_NAME} PRIVATE ${VS_LIB_FAKE})
else()
target_link_libraries(${PLUGIN_NAME} PRIVATE ${VS_LIB_DEFAULT})
endif()
# Unit tests
if(NOT WINDOWS)
add_executable(AvUtilTest test/utilTest.cpp src/timeoutUtil.cpp)
target_include_directories(AvUtilTest PRIVATE include)
target_include_directories(AvUtilTest PRIVATE ../common/format/include)
target_include_directories(AvUtilTest PRIVATE ${TESTING_INCLUDE})
target_include_directories(AvUtilTest SYSTEM PRIVATE ${FFMPEG_INCLUDE_PATH})
target_link_libraries(AvUtilTest PRIVATE ${VS_LIB_UNIT_TEST})
set_property(TARGET AvUtilTest PROPERTY CXX_STANDARD 14)
include_lib_vs_headers(AvUtilTest)
include_discovery_vs_headers(AvUtilTest)
add_test(NAME AvUtilTest COMMAND AvUtilTest)
endif()
endfunction()
if(WINDOWS)
if(GPU_BACKEND_CUDA)
setup_av_plugin("av_cuda" "cuda" "ON")
endif()
if(GPU_BACKEND_OPENCL)
setup_av_plugin("av_opencl" "opencl" "OFF")
endif()
else(WINDOWS)
setup_av_plugin("avPlugin" "" ${GPU_BACKEND_CUDA})
endif(WINDOWS)
# make I/O plugin list available to parent CMake project
set(VS_IO_LIBRARIES ${VS_IO_LIBRARIES} PARENT_SCOPE)
# AV plugin documentation
`av` is an IO plugin. It allows capturing video input/output from/to video files (such as mp4 files)
and reading input from RTSP streams.
## AV Input Configuration
The av plugin can be used by Vahana VR through a .vah project file. Please see the `*.vah file format
specification` for additional details.
Define an input for each camera. The `reader_config` member specifies how to read
it.
### Example
For a video file input:

    "inputs" : [
    {
      "width" : 2560,
      "height" : 2048,
      ...
      "reader_config" : "C:\\Users\\VideoStitch\\Vahana VR\\Projects\\test.mp4",
      ...
    }]
For an RTSP stream input:

    "inputs" : [
    {
      "width" : 2560,
      "height" : 2048,
      ...
      "reader_config" : "rtsp://10.0.0.203",
      ...
    }]
The RTSP stream URL always has to begin with `rtsp://` to be read by the `av` plugin.
## AV Output configuration
### Video configuration
For a video file output:

    "output" :
    {
      "type" : "mp4",
      "video_codec" : "h264",
      "filename" : "C:\\Users\\VideoStitch\\Vahana VR\\Projects\\test.mp4",
      "audio_codec" : "aac",
      "sampling_rate" : 48000,
      "sample_format" : "fltp",
      "channel_layout" : "stereo",
      "audio_bitrate" : 192
    }
###### type
type : *string*
default : **required**
notes : muxer type : "mp4", "mov"
###### video_codec
type : *string*
default : **required**
notes : video codec : mjpeg, mpeg2, mpeg4, h264, prores, h264_nvenc
###### filename
type : *string*
default : **required**
notes : the output file name
###### bitrate
type : *int*
default : *15000000*
notes : target bitrate, in *bps*
###### profile
type : *string*
default : *main*
notes : H264 profile, baseline | main | high | constrained_high | high444 | stereo
Specified values will be ignored if the resolution & fps do not fit in the requested profile.
###### level
type : *string*
default : *automatic*
notes : Any of the H264 standard levels.
Specified values will be ignored if the resolution & fps do not fit in the requested level.
###### bitrate_mode
type : *string*
default : *VBR*
notes : bitrate control mode: CBR or VBR (upper case).
###### gop
type : *int*
default : *250*
notes : The target GOP size. Unknown range (0~250 ?).
It is unknown whether scenecut detection can override this or whether an automatic GOP size is possible (e.g. with gop=0 as in the libx264 encoder). All I-frames are IDR-frames.
###### b_frames
type : *int*
default : *0*
notes : number of B frames between two P frames
### Audio configuration
Audio also has to be configured. The following table shows which parameters are supported for each audio codec.
<table>
<tr><th>Audio codec</th><th>Sampling rate</th><th>Sample format</th><th>Channel layout</th><th>Audio bitrate</th></tr>
<tr><td>"aac"</td><td>44100, 48000</td><td>"fltp"</td><td>"mono", "stereo", "3.0", "4.0", "5.0", "5.1", "amb_wxyz"</td><td>64, 128, 192, 512</td></tr>
<tr><td>"mp3"</td><td>44100, 48000</td><td>"s16p"</td><td>"mono", "stereo"</td><td>64, 128, 192</td></tr>
</table>
###### channel_map
type : *array of int*
notes : optional setting to remap audio channels if needed. The array size has to match the number of channels of the channel layout.
For example: with an amb_wxyz layout, the default channel order is W, X, Y, Z; with channel_map = [0, 3, 1, 2], the resulting channel order will be W, Y, Z, X.
// Copyright (c) 2012-2017 VideoStitch SAS
// Copyright (c) 2018 stitchEm
#pragma once
#include "libvideostitch/stitchOutput.hpp"
#include "libvideostitch/frame.hpp"
#include "libvideostitch/profile.hpp"
#include "muxer.hpp"
#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>
#include <stdint.h>
#include <cstdio>
#ifndef _MSC_VER
#include <sys/time.h>
#endif
struct AVCodecContext;
struct AVFrame;
namespace VideoStitch {
namespace Util {
enum AvErrorCode : short;
}
namespace Output {
/**
* @brief Additional indirection onto the implementation.
 * @note Allows resetting the writer implementation while keeping the same LibavWriter object
*/
class AvMuxer_pimpl;
class LibavWriter : public VideoWriter, public AudioWriter {
public:
static Output* create(const Ptv::Value& config, const std::string& name, const char* baseName, unsigned width,
unsigned height, FrameRate framerate, const Audio::SamplingRate samplingRate,
const Audio::SamplingDepth samplingDepth, const Audio::ChannelLayout channleLayout);
~LibavWriter();
void pushVideo(const Frame& videoFrame);
void pushAudio(Audio::Samples& audioSamples);
private:
LibavWriter(const Ptv::Value& config, const std::string& name, const VideoStitch::PixelFormat fmt, AddressSpace type,
unsigned width, unsigned height, FrameRate framerate, const Audio::SamplingRate samplingRate,
const Audio::SamplingDepth samplingDepth, const Audio::ChannelLayout channelLayout);
Util::AvErrorCode encodeVideoFrame(AVFrame* frame, int64_t frameOffset);
Util::AvErrorCode encodeAudioFrame(AVFrame* frame);
MuxerThreadStatus flushVideo();
MuxerThreadStatus flushAudio();
MuxerThreadStatus close();
bool needsRespawn(std::shared_ptr<AvMuxer_pimpl>&, mtime_t);
bool implReady(std::shared_ptr<AvMuxer_pimpl>&, AVCodecContext*, mtime_t);
bool hasAudio() const { return audioCodecContext != nullptr; }
bool createVideoCodec(AddressSpace type, unsigned width, unsigned height, FrameRate framerate);
bool createAudioCodec();
bool resetCodec(AVCodecContext*, MuxerThreadStatus& status);
Ptv::Value* m_config;
std::deque<AVFrame*> videoFrames;
AVDictionary* codecConfig;
AVCodecContext* videoCodecContext;
mtime_t firstVideoPTS;
AVFrame* audioFrame;
Audio::Samples audioBuffer;
uint8_t* audioData[MAX_AUDIO_CHANNELS]; // intermediate buffer
const uint8_t* avSamples; // buffer used by libav
Audio::SamplingFormat m_sampleFormat;
std::vector<int64_t> m_channelMap;
std::size_t m_audioFrameSizeInBytes;
AVCodecContext* audioCodecContext;
int m_currentImplNumber;
std::shared_ptr<AvMuxer_pimpl> m_pimplVideo;
std::shared_ptr<AvMuxer_pimpl> m_pimplAudio;
std::mutex pimplMu;
};
} // namespace Output
} // namespace VideoStitch
/* ****************************************************************************** *\
INTEL CORPORATION PROPRIETARY INFORMATION
This software is supplied under the terms of a license agreement or nondisclosure
agreement with Intel Corporation and may not be copied or disclosed except in
accordance with the terms of that agreement
Copyright(c) 2008-2013 Intel Corporation. All Rights Reserved.
\* ****************************************************************************** */
#pragma once
#if defined(WIN32) || defined(WIN64)
#ifndef D3D_SURFACES_SUPPORT
#define D3D_SURFACES_SUPPORT 1
#endif
#if defined(_WIN32) && !defined(MFX_D3D11_SUPPORT)
#include <sdkddkver.h>
#if (NTDDI_VERSION >= NTDDI_VERSION_FROM_WIN32_WINNT2(0x0602)) // >= _WIN32_WINNT_WIN8
#define MFX_D3D11_SUPPORT 1 // Enable D3D11 support if SDK allows
#else
#define MFX_D3D11_SUPPORT 0
#endif
#endif // #if defined(WIN32) && !defined(MFX_D3D11_SUPPORT)
#endif // #if defined(WIN32) || defined(WIN64)
#include <list>
#include <string.h>
#include <functional>
#include "mfx/mfxvideo.h"
struct mfxAllocatorParams {
virtual ~mfxAllocatorParams(){};
};
// this class implements methods declared in mfxFrameAllocator structure
// simply redirecting them to virtual methods which should be overridden in derived classes
class MFXFrameAllocator : public mfxFrameAllocator {
public:
MFXFrameAllocator();
virtual ~MFXFrameAllocator();
// optional method, override if you need to pass some parameters to the allocator from the application
virtual mfxStatus Init(mfxAllocatorParams *pParams) = 0;
virtual mfxStatus Close() = 0;
virtual mfxStatus AllocFrames(mfxFrameAllocRequest *request, mfxFrameAllocResponse *response) = 0;
virtual mfxStatus LockFrame(mfxMemId mid, mfxFrameData *ptr) = 0;
virtual mfxStatus UnlockFrame(mfxMemId mid, mfxFrameData *ptr) = 0;
virtual mfxStatus GetFrameHDL(mfxMemId mid, mfxHDL *handle) = 0;
virtual mfxStatus FreeFrames(mfxFrameAllocResponse *response) = 0;
private:
static mfxStatus MFX_CDECL Alloc_(mfxHDL pthis, mfxFrameAllocRequest *request, mfxFrameAllocResponse *response);
static mfxStatus MFX_CDECL Lock_(mfxHDL pthis, mfxMemId mid, mfxFrameData *ptr);
static mfxStatus MFX_CDECL Unlock_(mfxHDL pthis, mfxMemId mid, mfxFrameData *ptr);
static mfxStatus MFX_CDECL GetHDL_(mfxHDL pthis, mfxMemId mid, mfxHDL *handle);
static mfxStatus MFX_CDECL Free_(mfxHDL pthis, mfxFrameAllocResponse *response);
};
// This class implements basic logic of memory allocator
// Manages responses for different components according to allocation request type
// External frames of a particular component-related type are allocated in one call
// Further calls return previously allocated response.
// Ex. Preallocated frame chain with type=FROM_ENCODE | FROM_VPPIN will be returned when
// request type contains either FROM_ENCODE or FROM_VPPIN
// This class does not allocate any actual memory
class BaseFrameAllocator : public MFXFrameAllocator {
public:
BaseFrameAllocator();
virtual ~BaseFrameAllocator();
virtual mfxStatus Init(mfxAllocatorParams *pParams) = 0;
virtual mfxStatus Close();
virtual mfxStatus AllocFrames(mfxFrameAllocRequest *request, mfxFrameAllocResponse *response);
virtual mfxStatus FreeFrames(mfxFrameAllocResponse *response);
protected:
typedef std::list<mfxFrameAllocResponse>::iterator Iter;
static const mfxU32 MEMTYPE_FROM_MASK =
MFX_MEMTYPE_FROM_ENCODE | MFX_MEMTYPE_FROM_DECODE | MFX_MEMTYPE_FROM_VPPIN | MFX_MEMTYPE_FROM_VPPOUT;
struct UniqueResponse : mfxFrameAllocResponse {
mfxU16 m_cropw;
mfxU16 m_croph;
mfxU32 m_refCount;
mfxU16 m_type;
UniqueResponse() : m_cropw(0), m_croph(0), m_refCount(0), m_type(0) {
memset(static_cast<mfxFrameAllocResponse *>(this), 0, sizeof(mfxFrameAllocResponse));
}
// compare responses by actual frame size, alignment (w and h) is up to application
UniqueResponse(const mfxFrameAllocResponse &response, mfxU16 cropw, mfxU16 croph, mfxU16 type)
: mfxFrameAllocResponse(response), m_cropw(cropw), m_croph(croph), m_refCount(1), m_type(type) {}
// compare by resolution
bool operator()(const UniqueResponse &response) const {
return m_cropw == response.m_cropw && m_croph == response.m_croph;
}
};
std::list<mfxFrameAllocResponse> m_responses;
std::list<UniqueResponse> m_ExtResponses;
struct IsSame : public std::binary_function<mfxFrameAllocResponse, mfxFrameAllocResponse, bool> {
bool operator()(const mfxFrameAllocResponse &l, const mfxFrameAllocResponse &r) const {
return r.mids != 0 && l.mids != 0 && r.mids[0] == l.mids[0] && r.NumFrameActual == l.NumFrameActual;
}
};
// checks if request is supported
virtual mfxStatus CheckRequestType(mfxFrameAllocRequest *request);
// frees memory attached to response
virtual mfxStatus ReleaseResponse(mfxFrameAllocResponse *response) = 0;
// allocates memory
virtual mfxStatus AllocImpl(mfxFrameAllocRequest *request, mfxFrameAllocResponse *response) = 0;
template <class T>
class safe_array {
public:
safe_array(T *ptr = 0)
: m_ptr(ptr){
// construct from object pointer
};
~safe_array() { reset(0); }
T *get() { // return wrapped pointer
return m_ptr;
}
T *release() { // return wrapped pointer and give up ownership
T *ptr = m_ptr;
m_ptr = 0;
return ptr;
}
void reset(T *ptr) { // destroy designated object and store new pointer
if (m_ptr) {
delete[] m_ptr;
}
m_ptr = ptr;
}
protected:
T *m_ptr; // the wrapped object pointer
private:
safe_array(const safe_array &);
safe_array &operator=(const safe_array &);
};
};
class MFXBufferAllocator : public mfxBufferAllocator {
public:
MFXBufferAllocator();
virtual ~MFXBufferAllocator();
virtual mfxStatus AllocBuffer(mfxU32 nbytes, mfxU16 type, mfxMemId *mid) = 0;
virtual mfxStatus LockBuffer(mfxMemId mid, mfxU8 **ptr) = 0;
virtual mfxStatus UnlockBuffer(mfxMemId mid) = 0;
virtual mfxStatus FreeBuffer(mfxMemId mid) = 0;
private:
static mfxStatus MFX_CDECL Alloc_(mfxHDL pthis, mfxU32 nbytes, mfxU16 type, mfxMemId *mid);
static mfxStatus MFX_CDECL Lock_(mfxHDL pthis, mfxMemId mid, mfxU8 **ptr);
static mfxStatus MFX_CDECL Unlock_(mfxHDL pthis, mfxMemId mid);
static mfxStatus MFX_CDECL Free_(mfxHDL pthis, mfxMemId mid);
};
#pragma once
#if defined(_WIN32) || defined(_WIN64)
#include "baseAllocator.hpp"
#include <atlbase.h>
#include <d3d9.h>
#include <dxva2api.h>
enum eTypeHandle { DXVA2_PROCESSOR = 0x00, DXVA2_DECODER = 0x01 };
struct D3DAllocatorParams : mfxAllocatorParams {
IDirect3DDeviceManager9 *pManager;
DWORD surfaceUsage;
D3DAllocatorParams() : pManager(), surfaceUsage() {}
};
class D3DFrameAllocator : public BaseFrameAllocator {
public:
D3DFrameAllocator();
virtual ~D3DFrameAllocator();
virtual mfxStatus Init(mfxAllocatorParams *pParams);
virtual mfxStatus Close();
virtual IDirect3DDeviceManager9 *GetDeviceManager() { return m_manager; };
virtual mfxStatus LockFrame(mfxMemId mid, mfxFrameData *ptr);
virtual mfxStatus UnlockFrame(mfxMemId mid, mfxFrameData *ptr);
virtual mfxStatus GetFrameHDL(mfxMemId mid, mfxHDL *handle);
protected:
virtual mfxStatus CheckRequestType(mfxFrameAllocRequest *request);
virtual mfxStatus ReleaseResponse(mfxFrameAllocResponse *response);
virtual mfxStatus AllocImpl(mfxFrameAllocRequest *request, mfxFrameAllocResponse *response);
CComPtr<IDirect3DDeviceManager9> m_manager;
CComPtr<IDirectXVideoDecoderService> m_decoderService;
CComPtr<IDirectXVideoProcessorService> m_processorService;
HANDLE m_hDecoder;
HANDLE m_hProcessor;
DWORD m_surfaceUsage;
};
#endif // #if defined( _WIN32 ) || defined ( _WIN64 )
/*********************************************************************************
INTEL CORPORATION PROPRIETARY INFORMATION
This software is supplied under the terms of a license agreement or nondisclosure
agreement with Intel Corporation and may not be copied or disclosed except in
accordance with the terms of that agreement
Copyright(c) 2011-2014 Intel Corporation. All Rights Reserved.
**********************************************************************************/
#pragma once
#if defined(_WIN32) || defined(_WIN64)
#include "hwDevice.hpp"
#pragma warning(disable : 4201)
#include <d3d9.h>
#include <dxva2api.h>
#include <dxva.h>
#include <windows.h>
#define VIDEO_MAIN_FORMAT D3DFMT_YUY2
class IGFXS3DControl;
/** Direct3D 9 device implementation.
@note Can be initialized for only 1 or 2 views. A handle to
MFX_HANDLE_GFXS3DCONTROL must be set beforehand when initializing for 2 views.
@note The device always sets D3DPRESENT_PARAMETERS::Windowed to TRUE.
*/
class CD3D9Device : public CHWDevice {
public:
CD3D9Device();
virtual ~CD3D9Device();
virtual mfxStatus Init(mfxHDL hWindow, mfxU16 nViews, mfxU32 nAdapterNum);
virtual mfxStatus Reset();
virtual mfxStatus GetHandle(mfxHandleType type, mfxHDL* pHdl);
virtual mfxStatus SetHandle(mfxHandleType type, mfxHDL hdl);
virtual mfxStatus RenderFrame(mfxFrameSurface1* pSurface, mfxFrameAllocator* pmfxAlloc);
virtual void UpdateTitle(double /*fps*/) {}
virtual void Close();
void DefineFormat(bool isA2rgb10) { m_bIsA2rgb10 = (isA2rgb10) ? TRUE : FALSE; }
protected:
mfxStatus CreateVideoProcessors();
bool CheckOverlaySupport();
virtual mfxStatus FillD3DPP(mfxHDL hWindow, mfxU16 nViews, D3DPRESENT_PARAMETERS& D3DPP);
private:
IDirect3D9Ex* m_pD3D9;
IDirect3DDevice9Ex* m_pD3DD9;
IDirect3DDeviceManager9* m_pDeviceManager9;
D3DPRESENT_PARAMETERS m_D3DPP;
UINT m_resetToken;
mfxU16 m_nViews;
IGFXS3DControl* m_pS3DControl;
D3DSURFACE_DESC m_backBufferDesc;
// service required to create video processors
IDirectXVideoProcessorService* m_pDXVAVPS;
// left channel processor
IDirectXVideoProcessor* m_pDXVAVP_Left;
// right channel processor
IDirectXVideoProcessor* m_pDXVAVP_Right;
// target rectangle
RECT m_targetRect;
// various structures for DXVA2 calls
DXVA2_VideoDesc m_VideoDesc;
DXVA2_VideoProcessBltParams m_BltParams;
DXVA2_VideoSample m_Sample;
BOOL m_bIsA2rgb10;
};
#endif // #if defined( _WIN32 ) || defined ( _WIN64 )
/* ****************************************************************************** *\
INTEL CORPORATION PROPRIETARY INFORMATION
This software is supplied under the terms of a license agreement or nondisclosure
agreement with Intel Corporation and may not be copied or disclosed except in
accordance with the terms of that agreement
Copyright(c) 2013 Intel Corporation. All Rights Reserved.
\* ****************************************************************************** */
#pragma once
#include "mfx/mfxvideo.h"
#if defined(WIN32) || defined(WIN64)
#ifndef D3D_SURFACES_SUPPORT
#define D3D_SURFACES_SUPPORT 1
#endif
#if defined(_WIN32) && !defined(MFX_D3D11_SUPPORT)
#include <sdkddkver.h>
#if (NTDDI_VERSION >= NTDDI_VERSION_FROM_WIN32_WINNT2(0x0602)) // >= _WIN32_WINNT_WIN8
#define MFX_D3D11_SUPPORT 1 // Enable D3D11 support if SDK allows
#else
#define MFX_D3D11_SUPPORT 0
#endif
#endif // #if defined(WIN32) && !defined(MFX_D3D11_SUPPORT)
#endif // #if defined(WIN32) || defined(WIN64)
#define MSDK_ZERO_MEMORY(VAR) \
{ memset(&VAR, 0, sizeof(VAR)); }
#define MSDK_MEMCPY_VAR(dstVarName, src, count) memcpy_s(&(dstVarName), sizeof(dstVarName), (src), (count))
#define MSDK_SAFE_RELEASE(X) \
{ \
if (X) { \
X->Release(); \
X = NULL; \
} \
}
#define MSDK_CHECK_RESULT(P, X, ERR) \
{ \
if ((X) > (P)) { \
MSDK_PRINT_RET_MSG(ERR); \
return ERR; \
} \
}
#define MSDK_CHECK_POINTER(P, ...) \
{ \
if (!(P)) { \
return __VA_ARGS__; \
} \
}
#define MSDK_ARRAY_LEN(value) (sizeof(value) / sizeof(value[0]))
enum {
MFX_HANDLE_GFXS3DCONTROL = 0x100, /* A handle to the IGFXS3DControl instance */
MFX_HANDLE_DEVICEWINDOW = 0x101 /* A handle to the render window */
}; // mfxHandleType
/// Base class for hw device
class CHWDevice {
public:
virtual ~CHWDevice() {}
/** Initializes device for requested processing.
@param[in] hWindow Window handle to bundle device to.
@param[in] nViews Number of views to process.
@param[in] nAdapterNum Index of the adapter to use.
*/
virtual mfxStatus Init(mfxHDL hWindow, mfxU16 nViews, mfxU32 nAdapterNum) = 0;
/// Reset device.
virtual mfxStatus Reset() = 0;
/// Get a handle that can be used for MFX session SetHandle calls.
virtual mfxStatus GetHandle(mfxHandleType type, mfxHDL *pHdl) = 0;
/** Set handle.
Particular device implementation may require other objects to operate.
*/
virtual mfxStatus SetHandle(mfxHandleType type, mfxHDL hdl) = 0;
virtual mfxStatus RenderFrame(mfxFrameSurface1 *pSurface, mfxFrameAllocator *pmfxAlloc) = 0;
virtual void UpdateTitle(double fps) = 0;
virtual void Close() = 0;
};
// Copyright (c) 2012-2017 VideoStitch SAS
// Copyright (c) 2018 stitchEm
#pragma once
#include "libvideostitch/input.hpp"
#include "libvideostitch/inputFactory.hpp"
#include <deque>
#include <chrono>
extern "C" {
#include <libavutil/pixfmt.h>
#ifdef SUP_QUICKSYNC
#include <libavcodec/qsv.h>
#endif
}
#undef PixelFormat
struct AVFormatContext;
struct AVCodecContext;
struct AVCodec;
struct AVFrame;
struct AVPacket;
class CHWDevice;
class D3DFrameAllocator;
static const int INVALID_STREAM_ID(-1);
namespace VideoStitch {
namespace Util {
class TimeoutHandler;
enum AvErrorCode : short;
} // namespace Util
namespace Input {
#ifdef QUICKSYNC
class QSVContext {
public:
bool initMFX();
static int getQSVBuffer(AVCodecContext* avctx, AVFrame* frame, int flags);
static void freeQSVBuffer(void* opaque, uint8_t* data);
mfxSession session;
CHWDevice* hwdev;
D3DFrameAllocator* allocator;
mfxMemId* surface_ids;
int* surface_used;
int nb_surfaces;
mfxFrameInfo frame_info;
};
#endif
/**
* A deleter that does nothing
*/
template <class T>
struct NoopDeleter {
NoopDeleter() {}
NoopDeleter(const NoopDeleter<T>& /*other*/) {}
void operator()(T*) const {}
};
/**
* libav image reader.
*/
class LibavReader : public VideoReader, public AudioReader {
public:
// TODOLATERSTATUS replace by Input::ReadStatus
enum class LibavReadStatus { Ok, EndOfPackets, Error };
static ProbeResult probe(const std::string& fileNameTemplate);
// ~ is protected, can't use Potential's DefaultDeleter
typedef Potential<LibavReader, NoopDeleter<LibavReader>> PotentialLibavReader;
static PotentialLibavReader create(const std::string& fileNameTemplate,
VideoStitch::Plugin::VSReaderPlugin::Config runtime);
virtual ReadStatus readSamples(size_t nbSamples, Audio::Samples& audioSamples) override;
virtual Status seekFrame(frameid_t) override;
Status seekFrame(mtime_t) override;
virtual size_t available() override;
bool eos() override;
protected:
LibavReader(const std::string& displayName, const int64_t width, const int64_t height, const frameid_t firstFrame,
const AVPixelFormat fmt, AddressSpace addrSpace, struct AVFormatContext* formatCtx,
#ifdef QUICKSYNC
class QSVContext* qsvCtx,
#endif
struct AVCodecContext* videoDecoderCtx, struct AVCodecContext* audioDecoderCtx,
struct AVCodec* videoCodec, struct AVCodec* audioCodec, struct AVFrame* video, struct AVFrame* audio,
Util::TimeoutHandler* interruptCallback, const int videoIdx, const int audioIdx,
const Audio::ChannelLayout layout, const Audio::SamplingRate samplingRate,
const Audio::SamplingDepth samplingDepth);
~LibavReader();
LibavReadStatus readPacket(AVPacket* pkt);
static void findAvStreams(struct AVFormatContext* formatCtx, int& videoIdx, int& audioIdx);
static enum AVPixelFormat selectFormat(struct AVCodecContext*, const enum AVPixelFormat*);
void decodeVideoPacket(bool* got_picture, AVPacket* pkt, unsigned char* frame, bool flush = false);
void flushVideoDecoder(bool* got_picture, unsigned char* frame);
void decodeAudioPacket(AVPacket* pkt, bool flush = false);
struct AVFormatContext* formatCtx;
#ifdef QUICKSYNC
QSVContext* qsvCtx;
#endif
struct AVCodecContext* videoDecoderCtx;
struct AVCodecContext* audioDecoderCtx;
const struct AVCodec* videoCodec;
const struct AVCodec* audioCodec;
struct AVFrame* videoFrame;
struct AVFrame* audioFrame;
Util::TimeoutHandler* interruptCallback;
const int videoIdx;
const int audioIdx;
// time code of the last decoded video frame,
// expressed in time_base units (e.g. 1/90000 second),
// from the start of the container (see start_time semantics)
int64_t currentVideoPts;
// time code of the first video frame, in container clock units;
// in libvideostitch loadFrame dates, this corresponds to time 0
int64_t firstVideoFramePts;
std::vector<std::deque<uint8_t>> audioBuffer;
size_t nbSamplesInAudioBuffer;
mtime_t videoTimeStamp;
mtime_t audioTimeStamp;
bool expectingIncreasingVideoPts;
private:
static Util::AvErrorCode avDecodePacket(AVCodecContext* s, AVPacket* pkt, AVFrame* frame, bool* got_frame,
bool flush = false);
static int getBuffer(AVCodecContext* s, AVFrame* pic);
static void releaseBuffer(AVCodecContext* /*s*/, AVFrame* pic);
};
} // namespace Input
} // namespace VideoStitch
// Copyright (c) 2012-2017 VideoStitch SAS
// Copyright (c) 2018 stitchEm
#pragma once
#include "libvideostitch/input.hpp"
#include "libavReader.hpp"
#include <atomic>
#include <future>
#include <condition_variable>
#include <mutex>
#include <queue>
struct AVFormatContext;
struct AVCodec;
struct AVFrame;
struct AVPacket;
namespace VideoStitch {
namespace Input {
/* Network Streaming Client for Vahana Input Plugin */
class netStreamReader : public LibavReader {
public:
// TODOLATERSTATUS replace by Input::ReadStatus
enum NetStreamReadStatus { Ok, Error, Continue, EOS };
static bool handles(const std::string& filename);
netStreamReader(readerid_t id, const std::string& displayName, const int64_t width, const int64_t height,
const int firstFrame, const AVPixelFormat fmt, AddressSpace addrSpace,
struct AVFormatContext* formatCtx,
#ifdef QUICKSYNC
class QSVContext* qsvCtx,
#endif
struct AVCodecContext* videoDecoderCtx, struct AVCodecContext* audioDecoderCtx,
struct AVCodec* videoCodec, struct AVCodec* audioCodec, struct AVFrame* videoFrame,
struct AVFrame* audioFrame, Util::TimeoutHandler* interruptCallback, const int videoIdx,
const int audioIdx, const Audio::ChannelLayout layout, const Audio::SamplingRate samplingRate,
const Audio::SamplingDepth samplingDepth);
virtual ~netStreamReader();
ReadStatus readFrame(mtime_t& date, unsigned char* video) override;
ReadStatus readSamples(size_t nbSamples, Audio::Samples& audioSamples) override;
private:
void readNetPackets();
void decodeVideo();
void decodeAudio();
std::thread handlePackets;
std::thread handleVideo;
std::thread handleAudio;
std::mutex videoQueueMutex, audioQueueMutex;
std::condition_variable cvDecodeVideo, cvDecodeAudio;
std::queue<AVPacket*> videoPacketQueue, audioPacketQueue;
std::atomic<bool> stoppingQueues;
std::mutex videoFrameMutex;
std::mutex audioBufferMutex;
std::condition_variable cvNewFrame;
std::condition_variable cvFrameConsumed;
std::vector<unsigned char> frame;
std::atomic<bool> frameAvailable;
bool stoppingFrames;
};
} // namespace Input
} // namespace VideoStitch
// Copyright (c) 2012-2017 VideoStitch SAS
// Copyright (c) 2018 stitchEm
#pragma once
#include "libvideostitch/input.hpp"
#include "libavReader.hpp"
extern "C" {
#include <libavformat/avformat.h>
}
#include <mutex>
struct AVFormatContext;
struct AVCodec;
struct AVFrame;
namespace VideoStitch {
namespace Input {
/**
* libav image reader.
*/
class FFmpegReader : public LibavReader {
public:
enum FFmpegReadStatus { Ok, Error, Continue, EOS };
static bool handles(const std::string& filename);
Status seekFrame(frameid_t) override;
ReadStatus readFrame(mtime_t& date, unsigned char* videoFrame) override;
ReadStatus readSamples(size_t nbSamples, Audio::Samples& audioSamples) override;
size_t available() override;
virtual ~FFmpegReader();
FFmpegReader(readerid_t id, const std::string& displayName, const int64_t width, const int64_t height,
const int firstFrame, const AVPixelFormat fmt, AddressSpace addrSpace, struct AVFormatContext* formatCtx,
struct AVCodecContext* videoDecoderCtx, struct AVCodecContext* audioDecoderCtx,
struct AVCodec* videoCodec, struct AVCodec* audioCodec, struct AVFrame* videoFrame,
struct AVFrame* audioFrame, Util::TimeoutHandler* interruptCallback, const int videoIdx,
const int audioIdx, const Audio::ChannelLayout layout, const Audio::SamplingRate samplingRate,
const Audio::SamplingDepth samplingDepth);
private:
bool ensureAudio(size_t nbSamples);
std::vector<unsigned char> frame;
std::recursive_mutex monitor;
std::deque<AVPacket> videoQueue, audioQueue; // for audio preroll
};
} // namespace Input
} // namespace VideoStitch
/* ****************************************************************************** *\
INTEL CORPORATION PROPRIETARY INFORMATION
This software is supplied under the terms of a license agreement or nondisclosure
agreement with Intel Corporation and may not be copied or disclosed except in
accordance with the terms of that agreement
Copyright(c) 2008-2012 Intel Corporation. All Rights Reserved.
\* ****************************************************************************** */
#ifdef QUICKSYNC
#include <assert.h>
#include <algorithm>
#include "baseAllocator.hpp"
MFXFrameAllocator::MFXFrameAllocator() {
pthis = this;
Alloc = Alloc_;
Lock = Lock_;
Free = Free_;
Unlock = Unlock_;
GetHDL = GetHDL_;
}
MFXFrameAllocator::~MFXFrameAllocator() {}
mfxStatus MFXFrameAllocator::Alloc_(mfxHDL pthis, mfxFrameAllocRequest *request, mfxFrameAllocResponse *response) {
if (0 == pthis) return MFX_ERR_MEMORY_ALLOC;
MFXFrameAllocator &self = *(MFXFrameAllocator *)pthis;
return self.AllocFrames(request, response);
}
mfxStatus MFXFrameAllocator::Lock_(mfxHDL pthis, mfxMemId mid, mfxFrameData *ptr) {
if (0 == pthis) return MFX_ERR_MEMORY_ALLOC;
MFXFrameAllocator &self = *(MFXFrameAllocator *)pthis;
return self.LockFrame(mid, ptr);
}
mfxStatus MFXFrameAllocator::Unlock_(mfxHDL pthis, mfxMemId mid, mfxFrameData *ptr) {
if (0 == pthis) return MFX_ERR_MEMORY_ALLOC;
MFXFrameAllocator &self = *(MFXFrameAllocator *)pthis;
return self.UnlockFrame(mid, ptr);
}
mfxStatus MFXFrameAllocator::Free_(mfxHDL pthis, mfxFrameAllocResponse *response) {
if (0 == pthis) return MFX_ERR_MEMORY_ALLOC;
MFXFrameAllocator &self = *(MFXFrameAllocator *)pthis;
return self.FreeFrames(response);
}
mfxStatus MFXFrameAllocator::GetHDL_(mfxHDL pthis, mfxMemId mid, mfxHDL *handle) {
if (0 == pthis) return MFX_ERR_MEMORY_ALLOC;
MFXFrameAllocator &self = *(MFXFrameAllocator *)pthis;
return self.GetFrameHDL(mid, handle);
}
BaseFrameAllocator::BaseFrameAllocator() {}
BaseFrameAllocator::~BaseFrameAllocator() {}
mfxStatus BaseFrameAllocator::CheckRequestType(mfxFrameAllocRequest *request) {
if (0 == request) return MFX_ERR_NULL_PTR;
// check that Media SDK component is specified in request
if ((request->Type & MEMTYPE_FROM_MASK) != 0)
return MFX_ERR_NONE;
else
return MFX_ERR_UNSUPPORTED;
}
mfxStatus BaseFrameAllocator::AllocFrames(mfxFrameAllocRequest *request, mfxFrameAllocResponse *response) {
if (0 == request || 0 == response || 0 == request->NumFrameSuggested) return MFX_ERR_MEMORY_ALLOC;
if (MFX_ERR_NONE != CheckRequestType(request)) return MFX_ERR_UNSUPPORTED;
mfxStatus sts = MFX_ERR_NONE;
if ((request->Type & MFX_MEMTYPE_EXTERNAL_FRAME) && (request->Type & MFX_MEMTYPE_FROM_DECODE)) {
// external decoder allocations
std::list<UniqueResponse>::iterator it =
std::find_if(m_ExtResponses.begin(), m_ExtResponses.end(),
UniqueResponse(*response, request->Info.CropW, request->Info.CropH, 0));
if (it != m_ExtResponses.end()) {
// check if enough frames were allocated
if (request->NumFrameSuggested > it->NumFrameActual) return MFX_ERR_MEMORY_ALLOC;
it->m_refCount++;
// return existing response
*response = (mfxFrameAllocResponse &)*it;
} else {
sts = AllocImpl(request, response);
if (sts == MFX_ERR_NONE) {
m_ExtResponses.push_back(
UniqueResponse(*response, request->Info.CropW, request->Info.CropH, request->Type & MEMTYPE_FROM_MASK));
}
}
} else {
// internal allocations
// reserve space before allocation to avoid memory leak
m_responses.push_back(mfxFrameAllocResponse());
sts = AllocImpl(request, response);
if (sts == MFX_ERR_NONE) {
m_responses.back() = *response;
} else {
m_responses.pop_back();
}
}
return sts;
}
mfxStatus BaseFrameAllocator::FreeFrames(mfxFrameAllocResponse *response) {
if (response == 0) return MFX_ERR_INVALID_HANDLE;
mfxStatus sts = MFX_ERR_NONE;
// check whether response is an external decoder response
std::list<UniqueResponse>::iterator i =
std::find_if(m_ExtResponses.begin(), m_ExtResponses.end(),
[&](const UniqueResponse &r) { return IsSame()(*response, r); });
if (i != m_ExtResponses.end()) {
if ((--i->m_refCount) == 0) {
sts = ReleaseResponse(response);
m_ExtResponses.erase(i);
}
return sts;
}
// if not found so far, then search in internal responses
std::list<mfxFrameAllocResponse>::iterator i2 =
std::find_if(m_responses.begin(), m_responses.end(),
[&](const mfxFrameAllocResponse &r) { return IsSame()(*response, r); });
if (i2 != m_responses.end()) {
sts = ReleaseResponse(response);
m_responses.erase(i2);
return sts;
}
// not found anywhere, report an error
return MFX_ERR_INVALID_HANDLE;
}
mfxStatus BaseFrameAllocator::Close() {
std::list<UniqueResponse>::iterator i;
for (i = m_ExtResponses.begin(); i != m_ExtResponses.end(); ++i) {
ReleaseResponse(&*i);
}
m_ExtResponses.clear();
std::list<mfxFrameAllocResponse>::iterator i2;
for (i2 = m_responses.begin(); i2 != m_responses.end(); ++i2) {
ReleaseResponse(&*i2);
}
return MFX_ERR_NONE;
}
MFXBufferAllocator::MFXBufferAllocator() {
pthis = this;
Alloc = Alloc_;
Lock = Lock_;
Free = Free_;
Unlock = Unlock_;
}
MFXBufferAllocator::~MFXBufferAllocator() {}
mfxStatus MFXBufferAllocator::Alloc_(mfxHDL pthis, mfxU32 nbytes, mfxU16 type, mfxMemId *mid) {
if (0 == pthis) return MFX_ERR_MEMORY_ALLOC;
MFXBufferAllocator &self = *(MFXBufferAllocator *)pthis;
return self.AllocBuffer(nbytes, type, mid);
}
mfxStatus MFXBufferAllocator::Lock_(mfxHDL pthis, mfxMemId mid, mfxU8 **ptr) {
if (0 == pthis) return MFX_ERR_MEMORY_ALLOC;
MFXBufferAllocator &self = *(MFXBufferAllocator *)pthis;
return self.LockBuffer(mid, ptr);
}
mfxStatus MFXBufferAllocator::Unlock_(mfxHDL pthis, mfxMemId mid) {
if (0 == pthis) return MFX_ERR_MEMORY_ALLOC;
MFXBufferAllocator &self = *(MFXBufferAllocator *)pthis;
return self.UnlockBuffer(mid);
}
mfxStatus MFXBufferAllocator::Free_(mfxHDL pthis, mfxMemId mid) {
if (0 == pthis) return MFX_ERR_MEMORY_ALLOC;
MFXBufferAllocator &self = *(MFXBufferAllocator *)pthis;
return self.FreeBuffer(mid);
}
#endif
/*********************************************************************************
INTEL CORPORATION PROPRIETARY INFORMATION
This software is supplied under the terms of a license agreement or nondisclosure
agreement with Intel Corporation and may not be copied or disclosed except in
accordance with the terms of that agreement
Copyright(c) 2008-2015 Intel Corporation. All Rights Reserved.
**********************************************************************************/
#ifdef QUICKSYNC
#include "d3dAllocator.hpp"
#define MSDK_SAFE_FREE(X) \
{ \
if (X) { \
free(X); \
X = NULL; \
} \
}
#if defined(_WIN32) || defined(_WIN64)
#include <objbase.h>
#include <initguid.h>
#include <assert.h>
#include <d3d9.h>
#define D3DFMT_NV12 (D3DFORMAT) MAKEFOURCC('N', 'V', '1', '2')
#define D3DFMT_YV12 (D3DFORMAT) MAKEFOURCC('Y', 'V', '1', '2')
#define D3DFMT_P010 (D3DFORMAT) MAKEFOURCC('P', '0', '1', '0')
D3DFORMAT ConvertMfxFourccToD3dFormat(mfxU32 fourcc) {
switch (fourcc) {
case MFX_FOURCC_NV12:
return D3DFMT_NV12;
case MFX_FOURCC_YV12:
return D3DFMT_YV12;
case MFX_FOURCC_YUY2:
return D3DFMT_YUY2;
case MFX_FOURCC_RGB3:
return D3DFMT_R8G8B8;
case MFX_FOURCC_RGB4:
return D3DFMT_A8R8G8B8;
case MFX_FOURCC_P8:
return D3DFMT_P8;
case MFX_FOURCC_P010:
return D3DFMT_P010;
case MFX_FOURCC_A2RGB10:
return D3DFMT_A2R10G10B10;
default:
return D3DFMT_UNKNOWN;
}
}
D3DFrameAllocator::D3DFrameAllocator()
: m_decoderService(0), m_processorService(0), m_hDecoder(0), m_hProcessor(0), m_manager(0), m_surfaceUsage(0) {}
D3DFrameAllocator::~D3DFrameAllocator() { Close(); }
mfxStatus D3DFrameAllocator::Init(mfxAllocatorParams *pParams) {
D3DAllocatorParams *pd3dParams = 0;
pd3dParams = dynamic_cast<D3DAllocatorParams *>(pParams);
if (!pd3dParams) return MFX_ERR_NOT_INITIALIZED;
m_manager = pd3dParams->pManager;
m_surfaceUsage = pd3dParams->surfaceUsage;
return MFX_ERR_NONE;
}
mfxStatus D3DFrameAllocator::Close() {
if (m_manager && m_hDecoder) {
m_manager->CloseDeviceHandle(m_hDecoder);
m_manager = 0;
m_hDecoder = 0;
}
if (m_manager && m_hProcessor) {
m_manager->CloseDeviceHandle(m_hProcessor);
m_manager = 0;
m_hProcessor = 0;
}
return BaseFrameAllocator::Close();
}
mfxStatus D3DFrameAllocator::LockFrame(mfxMemId mid, mfxFrameData *ptr) {
if (!ptr || !mid) return MFX_ERR_NULL_PTR;
mfxHDLPair *dxmid = (mfxHDLPair *)mid;
IDirect3DSurface9 *pSurface = static_cast<IDirect3DSurface9 *>(dxmid->first);
if (pSurface == 0) return MFX_ERR_INVALID_HANDLE;
D3DSURFACE_DESC desc;
HRESULT hr = pSurface->GetDesc(&desc);
if (FAILED(hr)) return MFX_ERR_LOCK_MEMORY;
if (desc.Format != D3DFMT_NV12 && desc.Format != D3DFMT_YV12 && desc.Format != D3DFMT_YUY2 &&
desc.Format != D3DFMT_R8G8B8 && desc.Format != D3DFMT_A8R8G8B8 && desc.Format != D3DFMT_P8 &&
desc.Format != D3DFMT_P010 && desc.Format != D3DFMT_A2R10G10B10)
return MFX_ERR_LOCK_MEMORY;
D3DLOCKED_RECT locked;
hr = pSurface->LockRect(&locked, 0, D3DLOCK_NOSYSLOCK);
if (FAILED(hr)) return MFX_ERR_LOCK_MEMORY;
switch ((DWORD)desc.Format) {
case D3DFMT_NV12:
ptr->Pitch = (mfxU16)locked.Pitch;
ptr->Y = (mfxU8 *)locked.pBits;
ptr->U = (mfxU8 *)locked.pBits + desc.Height * locked.Pitch;
ptr->V = ptr->U + 1;
break;
case D3DFMT_YV12:
ptr->Pitch = (mfxU16)locked.Pitch;
ptr->Y = (mfxU8 *)locked.pBits;
ptr->V = ptr->Y + desc.Height * locked.Pitch;
ptr->U = ptr->V + (desc.Height * locked.Pitch) / 4;
break;
case D3DFMT_YUY2:
ptr->Pitch = (mfxU16)locked.Pitch;
ptr->Y = (mfxU8 *)locked.pBits;
ptr->U = ptr->Y + 1;
ptr->V = ptr->Y + 3;
break;
case D3DFMT_R8G8B8:
ptr->Pitch = (mfxU16)locked.Pitch;
ptr->B = (mfxU8 *)locked.pBits;
ptr->G = ptr->B + 1;
ptr->R = ptr->B + 2;
break;
case D3DFMT_A8R8G8B8:
case D3DFMT_A2R10G10B10:
ptr->Pitch = (mfxU16)locked.Pitch;
ptr->B = (mfxU8 *)locked.pBits;
ptr->G = ptr->B + 1;
ptr->R = ptr->B + 2;
ptr->A = ptr->B + 3;
break;
case D3DFMT_P8:
ptr->Pitch = (mfxU16)locked.Pitch;
ptr->Y = (mfxU8 *)locked.pBits;
ptr->U = 0;
ptr->V = 0;
break;
case D3DFMT_P010:
ptr->PitchHigh = (mfxU16)(locked.Pitch / (1 << 16));
ptr->PitchLow = (mfxU16)(locked.Pitch % (1 << 16));
ptr->Y = (mfxU8 *)locked.pBits;
ptr->U = (mfxU8 *)locked.pBits + desc.Height * locked.Pitch;
ptr->V = ptr->U + 1;
break;
}
return MFX_ERR_NONE;
}
mfxStatus D3DFrameAllocator::UnlockFrame(mfxMemId mid, mfxFrameData *ptr) {
if (!mid) return MFX_ERR_NULL_PTR;
mfxHDLPair *dxmid = (mfxHDLPair *)mid;
IDirect3DSurface9 *pSurface = static_cast<IDirect3DSurface9 *>(dxmid->first);
if (pSurface == 0) return MFX_ERR_INVALID_HANDLE;
pSurface->UnlockRect();
if (NULL != ptr) {
ptr->Pitch = 0;
ptr->Y = 0;
ptr->U = 0;
ptr->V = 0;
}
return MFX_ERR_NONE;
}
mfxStatus D3DFrameAllocator::GetFrameHDL(mfxMemId mid, mfxHDL *handle) {
if (!mid || !handle) return MFX_ERR_NULL_PTR;
mfxHDLPair *dxMid = (mfxHDLPair *)mid;
*handle = dxMid->first;
return MFX_ERR_NONE;
}
mfxStatus D3DFrameAllocator::CheckRequestType(mfxFrameAllocRequest *request) {
mfxStatus sts = BaseFrameAllocator::CheckRequestType(request);
if (MFX_ERR_NONE != sts) return sts;
if ((request->Type & (MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET | MFX_MEMTYPE_VIDEO_MEMORY_PROCESSOR_TARGET)) != 0)
return MFX_ERR_NONE;
else
return MFX_ERR_UNSUPPORTED;
}
mfxStatus D3DFrameAllocator::ReleaseResponse(mfxFrameAllocResponse *response) {
if (!response) return MFX_ERR_NULL_PTR;
mfxStatus sts = MFX_ERR_NONE;
if (response->mids) {
for (mfxU32 i = 0; i < response->NumFrameActual; i++) {
if (response->mids[i]) {
mfxHDLPair *dxMids = (mfxHDLPair *)response->mids[i];
static_cast<IDirect3DSurface9 *>(dxMids->first)->Release();
}
}
MSDK_SAFE_FREE(response->mids[0]);
}
MSDK_SAFE_FREE(response->mids);
return sts;
}
mfxStatus D3DFrameAllocator::AllocImpl(mfxFrameAllocRequest *request, mfxFrameAllocResponse *response) {
HRESULT hr;
if (request->NumFrameSuggested == 0) return MFX_ERR_UNKNOWN;
D3DFORMAT format = ConvertMfxFourccToD3dFormat(request->Info.FourCC);
if (format == D3DFMT_UNKNOWN) return MFX_ERR_UNSUPPORTED;
DWORD target;
if (MFX_MEMTYPE_DXVA2_DECODER_TARGET & request->Type) {
target = DXVA2_VideoDecoderRenderTarget;
} else if (MFX_MEMTYPE_DXVA2_PROCESSOR_TARGET & request->Type) {
target = DXVA2_VideoProcessorRenderTarget;
} else
return MFX_ERR_UNSUPPORTED;
IDirectXVideoAccelerationService *videoService = NULL;
if (target == DXVA2_VideoProcessorRenderTarget) {
if (!m_hProcessor) {
hr = m_manager->OpenDeviceHandle(&m_hProcessor);
if (FAILED(hr)) return MFX_ERR_MEMORY_ALLOC;
hr = m_manager->GetVideoService(m_hProcessor, IID_IDirectXVideoProcessorService, (void **)&m_processorService);
if (FAILED(hr)) return MFX_ERR_MEMORY_ALLOC;
}
videoService = m_processorService;
} else {
if (!m_hDecoder) {
hr = m_manager->OpenDeviceHandle(&m_hDecoder);
if (FAILED(hr)) return MFX_ERR_MEMORY_ALLOC;
hr = m_manager->GetVideoService(m_hDecoder, IID_IDirectXVideoDecoderService, (void **)&m_decoderService);
if (FAILED(hr)) return MFX_ERR_MEMORY_ALLOC;
}
videoService = m_decoderService;
}
mfxHDLPair *dxMids = (mfxHDLPair *)calloc(request->NumFrameSuggested, sizeof(mfxHDLPair));
mfxHDLPair **dxMidPtrs = (mfxHDLPair **)calloc(request->NumFrameSuggested, sizeof(mfxHDLPair *));
if (!dxMids || !dxMidPtrs) {
MSDK_SAFE_FREE(dxMids);
MSDK_SAFE_FREE(dxMidPtrs);
return MFX_ERR_MEMORY_ALLOC;
}
response->mids = (mfxMemId *)dxMidPtrs;
response->NumFrameActual = request->NumFrameSuggested;
if (request->Type & MFX_MEMTYPE_EXTERNAL_FRAME) {
for (int i = 0; i < request->NumFrameSuggested; i++) {
hr = videoService->CreateSurface(request->Info.Width, request->Info.Height, 0, format, D3DPOOL_DEFAULT,
m_surfaceUsage, target, (IDirect3DSurface9 **)&dxMids[i].first,
&dxMids[i].second);
if (FAILED(hr)) {
// ReleaseResponse already frees dxMids through response->mids[0] once at
// least one surface was created; free it directly only when none were,
// to avoid a double free
ReleaseResponse(response);
if (i == 0) {
MSDK_SAFE_FREE(dxMids);
}
return MFX_ERR_MEMORY_ALLOC;
}
dxMidPtrs[i] = &dxMids[i];
}
} else {
safe_array<IDirect3DSurface9 *> dxSrf(new IDirect3DSurface9 *[request->NumFrameSuggested]);
if (!dxSrf.get()) {
MSDK_SAFE_FREE(dxMids);
MSDK_SAFE_FREE(response->mids);
return MFX_ERR_MEMORY_ALLOC;
}
hr = videoService->CreateSurface(request->Info.Width, request->Info.Height, request->NumFrameSuggested - 1, format,
D3DPOOL_DEFAULT, m_surfaceUsage, target, dxSrf.get(), NULL);
if (FAILED(hr)) {
MSDK_SAFE_FREE(dxMids);
MSDK_SAFE_FREE(response->mids);
return MFX_ERR_MEMORY_ALLOC;
}
for (int i = 0; i < request->NumFrameSuggested; i++) {
dxMids[i].first = dxSrf.get()[i];
dxMidPtrs[i] = &dxMids[i];
}
}
return MFX_ERR_NONE;
}
#endif // #if defined(_WIN32) || defined(_WIN64)
#endif
// Copyright (c) 2012-2017 VideoStitch SAS
// Copyright (c) 2018 stitchEm
#include "export.hpp"
#include "videoReader.hpp"
#include "netStreamReader.hpp"
#include "avWriter.hpp"
#include "libvideostitch/logging.hpp"
#include "libvideostitch/output.hpp"
#include "libvideostitch/plugin.hpp"
#include "libvideostitch/ptv.hpp"
#include "libvideostitch/status.hpp"
#include <ostream>
/** \name Services for reader plugin. */
//\{
extern "C" VS_PLUGINS_EXPORT VideoStitch::Potential<VideoStitch::Input::Reader>* createReaderFn(
VideoStitch::Ptv::Value const* config, VideoStitch::Plugin::VSReaderPlugin::Config runtime) {
if (VideoStitch::Input::FFmpegReader::handles(config->asString()) ||
VideoStitch::Input::netStreamReader::handles(config->asString())) {
auto potLibAvReader = VideoStitch::Input::LibavReader::create(config->asString(), runtime);
if (potLibAvReader.ok()) {
return new VideoStitch::Potential<VideoStitch::Input::Reader>(potLibAvReader.release());
} else {
return new VideoStitch::Potential<VideoStitch::Input::Reader>(potLibAvReader.status());
}
}
return new VideoStitch::Potential<VideoStitch::Input::Reader>{VideoStitch::Origin::Input,
VideoStitch::ErrType::InvalidConfiguration,
"Reader doesn't handle this configuration"};
}
extern "C" VS_PLUGINS_EXPORT bool handleReaderFn(VideoStitch::Ptv::Value const* config) {
if (config && config->getType() == VideoStitch::Ptv::Value::STRING) {
return (VideoStitch::Input::FFmpegReader::handles(config->asString()) ||
VideoStitch::Input::netStreamReader::handles(config->asString()));
} else {
return false;
}
}
extern "C" VS_PLUGINS_EXPORT VideoStitch::Input::ProbeResult probeReaderFn(std::string const& p_filename) {
return VideoStitch::Input::LibavReader::probe(p_filename);
}
//\}
/** \name Services for writer plugin. */
//\{
extern "C" VS_PLUGINS_EXPORT VideoStitch::Potential<VideoStitch::Output::Output>* createWriterFn(
VideoStitch::Ptv::Value const* config, VideoStitch::Plugin::VSWriterPlugin::Config run_time) {
VideoStitch::Output::Output* lReturn = nullptr;
VideoStitch::Output::BaseConfig baseConfig;
const VideoStitch::Status parseStatus = baseConfig.parse(*config);
if (parseStatus.ok()) {
lReturn = VideoStitch::Output::LibavWriter::create(*config, run_time.name, baseConfig.baseName, run_time.width,
run_time.height, run_time.framerate, run_time.rate,
run_time.depth, run_time.layout);
if (lReturn) {
return new VideoStitch::Potential<VideoStitch::Output::Output>(lReturn);
}
return new VideoStitch::Potential<VideoStitch::Output::Output>(
VideoStitch::Origin::Output, VideoStitch::ErrType::InvalidConfiguration, "Could not create av writer");
}
return new VideoStitch::Potential<VideoStitch::Output::Output>(
VideoStitch::Origin::Output, VideoStitch::ErrType::InvalidConfiguration,
"Could not parse AV Writer configuration", parseStatus);
}
extern "C" VS_PLUGINS_EXPORT bool handleWriterFn(VideoStitch::Ptv::Value const* config) {
bool lReturn(false);
VideoStitch::Output::BaseConfig baseConfig;
if (baseConfig.parse(*config).ok()) {
lReturn = (!strcmp(baseConfig.strFmt, "mp4") || !strcmp(baseConfig.strFmt, "mov"));
} else {
// TODOLATERSTATUS
VideoStitch::Logger::get(VideoStitch::Logger::Verbose) << "avPlugin: cannot parse BaseConfig" << std::endl;
}
return lReturn;
}
//\}
#ifdef TestLinking
int main() {
/** This code is not expected to run: it's just a way to check that all
required symbols will be in the library. */
VideoStitch::Ptv::Value const* config = 0;
{
VideoStitch::Plugin::VSReaderPlugin::Config runtime;
createReaderFn(config, runtime);
}
handleReaderFn(config);
probeReaderFn(std::string());
VideoStitch::Plugin::VSWriterPlugin::Config runtime;
createWriterFn(config, runtime);
handleWriterFn(config);
return 0;
}
#endif
// Copyright (c) 2012-2017 VideoStitch SAS
// Copyright (c) 2018 stitchEm
#include "util.hpp"
#include "libvideostitch/logging.hpp"
namespace VideoStitch {
namespace Util {
void TimeoutHandler::logTimeout() {
Logger::get(Logger::Warning) << "[TimeoutHandler] Operation timed out, interrupting!" << std::endl;
}
} // namespace Util
} // namespace VideoStitch
if(DISABLE_BMP)
return()
endif(DISABLE_BMP)
set(SOURCE_FILES
bmpInput.cpp
export.cpp)
if(WINDOWS)
set(PLUGIN_NAME bmp)
else(WINDOWS)
set(PLUGIN_NAME bmpPlugin)
endif(WINDOWS)
vs_add_IO_library(${PLUGIN_NAME} SHARED ${SOURCE_FILES} $<TARGET_OBJECTS:common>)
include_lib_vs_headers(${PLUGIN_NAME})
target_include_directories(${PLUGIN_NAME} PRIVATE .)
target_include_directories(${PLUGIN_NAME} PRIVATE ../common/include)
set_property(TARGET ${PLUGIN_NAME} PROPERTY CXX_STANDARD 14)
link_target_to_libvideostitch(${PLUGIN_NAME})
// Copyright (c) 2012-2017 VideoStitch SAS
// Copyright (c) 2018 stitchEm
#include "muxer.hpp"
namespace VideoStitch {
namespace Output {
class FileMuxer : public Muxer {
public:
explicit FileMuxer(size_t index, const std::string& format, const std::string& filename,
std::vector<AVEncoder>& codecs, const AVDictionary*);
~FileMuxer();
virtual void writeTrailer();
virtual bool openResource(const std::string& filename);
private:
bool MP4WebOptimizerInternal(const std::string&);
bool reserved_moov;
};
} // namespace Output
} // namespace VideoStitch
// Copyright (c) 2012-2017 VideoStitch SAS
// Copyright (c) 2018 stitchEm
#pragma once
#include <cstdint>
bool qt_faststart(const char *srcFile, const char *dstFile, const uint32_t nb_channels);
// Copyright (c) 2012-2017 VideoStitch SAS
// Copyright (c) 2018 stitchEm
/* Workaround hack to fix a link problem when generating a shared lib
on Linux. */
extern "C" {
void *__dso_handle = 0;
}
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.