Compare commits


327 Commits
test ... master

Author SHA1 Message Date
Christian Clauss
df129c7ba3
Let's implicitly fix a typo (#681) 2022-07-11 14:03:07 -07:00
Christian Clauss
35886c970c
Upgrade GitHub Actions again (#679) 2022-07-11 14:02:31 -07:00
Karl Kroening
ef00863269
Fix Black in GHA for Python 2.7 (#680)
(At least until Python 2.7 support is finally eliminated)
2022-07-11 13:51:06 -07:00
Christian Clauss
ed70f2e619
Upgrade GitHub Actions (#643) 2022-07-11 13:39:36 -07:00
lcjh
fc41f4aa84
Fix heigth -> height typo (#596)
Co-authored-by: Karl Kroening <karlk@kralnet.us>
2022-03-07 01:55:30 -08:00
Karthikeyan Singaravelan
6189cd6861
Import ABC from collections.abc for Python 3.9+ compatibility (#330)
* Import ABC from collections.abc instead of collections for Python 3.9 compatibility.

* Fix deprecation warnings due to invalid escape sequences.

* Support Python 3.10

Co-authored-by: Karl Kroening <karlk@kralnet.us>
2022-03-07 01:46:52 -08:00
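The compatibility pattern this commit describes can be sketched as follows (an illustrative shim, not the project's exact code): Python 3.10 removed the ABC aliases from `collections`, so the import must come from `collections.abc` where available.

```python
# Sketch of the collections.abc compatibility shim described above:
# prefer the Python 3.3+ location, fall back for Python 2.7.
try:
    from collections.abc import Iterable
except ImportError:  # Python 2.7 fallback
    from collections import Iterable

print(isinstance([], Iterable))  # prints True
```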
Karl Kroening
cb9d400467
Add FFmpeg installation instructions (#642)
Co-authored-by: digitalcircuits <59550818+digitalcircuits@users.noreply.github.com>
Co-authored-by: digitalcircuits <digitalcircuits@github>
2022-03-07 01:19:09 -08:00
Karl Kroening
29b6f09298
Use GitHub Actions for CI. (#641)
This sets up GitHub Actions (GHA) to run in place of the
currently broken Travis CI.  Initially, this only covers running
tox/pytest and Black, but may eventually be extended to run pylint,
mypy, flake8, etc. - see #605, for example.

Notes:
* Python 3.10 is not yet supported due to the `collections.Iterable`
  issue discussed in #330, #624, etc.
* The Black CI step acts as a linting step, rather than attempting to
  have the GHA job automatically update/commit/push the reformatted
  code.
* Black is currently pinned to an older version that supports
  `--target-version py27` until Python 2 compatibility can be dropped in
  the final Python 2 compatibility release of ffmpeg-python.
* Only the main source directory (`ffmpeg/`) is checked with Black at
  the moment.  The `examples/` directory should also be checked, but
  will be done as a separate PR.

Co-authored-by: Christian Clauss <cclauss@me.com>
2022-03-07 00:05:43 -08:00
Karl Kroening
fd1da13f11
Re-apply Black formatting, and wrap docstrings at ~88 columns. (#639) 2022-03-06 13:24:40 -08:00
Davide Depau
f3079726fa
Merge pull request #494 from kkroening/revert-493-revert-430-master
Revert "Revert "Implemented cwd parameter""
2021-02-16 23:27:34 +01:00
Davide Depau
807aaccb14
Revert "Revert "Implemented cwd parameter"" 2021-02-16 23:26:53 +01:00
Davide Depau
cc1f664fb4
Merge pull request #493 from kkroening/revert-430-master
Revert "Implemented cwd parameter"
2021-02-16 22:55:10 +01:00
Davide Depau
c764166f44
Revert "Implemented cwd parameter" 2021-02-16 22:54:27 +01:00
Davide Depau
4974364d17
Merge pull request #430 from Jacotsu/master
Implemented cwd parameter
2021-02-15 18:45:52 +01:00
Davide Depau
80e99cbb38
Merge pull request #433 from 372046933/patch-1
Fix typo in _run.py docstring
2020-12-13 05:24:28 +01:00
372046933
861b453b43 Fix typo in _run.py docstring 2020-12-11 16:30:12 +08:00
Davide Depau
0612a44f91
Merge pull request #417 from 0x3333/master
Fix issue #195. Redirect stdout/err to /dev/null
2020-12-06 23:54:08 +01:00
Davide Depau
e044890010
Merge pull request #440 from raulpy271/master
adding http server example
2020-12-03 17:01:40 +01:00
raulpy271
5b6b58308f
Replacing server_url content, "http://localhost:8080" to "http://127.0.0.1:8080". 2020-12-02 14:19:53 -03:00
Davide Depau
2931580908
Merge pull request #442 from revolter/patch-1
Add command line arguments FAQ
2020-12-02 15:57:14 +01:00
Iulian Onofrei
08e50ac02c
Add command line arguments FAQ 2020-11-08 12:51:35 +02:00
Raul
15ffcc0c72 adding http server example 2020-10-30 18:41:59 -03:00
Tercio Gaudencio Filho
0ec6e69d88
Redirect stderr to stdout and stdout to DEVNULL when quiet is requested. 2020-10-30 17:29:44 -03:00
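The redirection this commit describes can be sketched with a hypothetical `run` helper (not the library's actual signature): when `quiet` is requested, stderr follows stdout and stdout goes to `DEVNULL`, so ffmpeg's console noise is discarded rather than leaking onto the caller's terminal.

```python
import subprocess
import sys

def run(cmd, quiet=False):
    # Hypothetical sketch of the fix described above: with quiet=True,
    # stderr is merged into stdout, and stdout is sent to DEVNULL,
    # so both streams are silently discarded.
    stdout = subprocess.DEVNULL if quiet else None
    stderr = subprocess.STDOUT if quiet else None
    return subprocess.call(cmd, stdout=stdout, stderr=stderr)

# A generic command stands in for ffmpeg here:
rc = run([sys.executable, '-c', 'import sys; sys.stderr.write("noise")'],
         quiet=True)
```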
Jacotsu
17995f5ff3 Updated test to check the new cwd parameter 2020-10-09 16:53:17 +02:00
Jacotsu
b64f40a8b5 Fixed typo in cwd parameter usage 2020-10-07 18:56:05 +02:00
Jacotsu
96fb3ff050 Added parameter to set ffmpeg's working directory 2020-10-07 18:52:15 +02:00
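The working-directory parameter added in this commit can be sketched like so (hypothetical signature, with a generic command standing in for ffmpeg): relative paths in the command resolve against `cwd` instead of the caller's current directory.

```python
import os
import subprocess
import sys
import tempfile

def run(cmd, cwd=None):
    # Sketch of the cwd feature described above: the child process
    # starts in `cwd`, so its relative input/output paths land there.
    return subprocess.call(cmd, cwd=cwd)

workdir = tempfile.mkdtemp()
run([sys.executable, '-c', "open('out.txt', 'w').write('done')"], cwd=workdir)
print(os.path.exists(os.path.join(workdir, 'out.txt')))  # prints True
```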
Tercio Gaudencio Filho
c12e8890ad
Fix issue #195. Redirect stdout/stderr to DEVNULL when quiet is requested. 2020-09-15 22:08:51 -03:00
Karl Kroening
4cb7d26f55
Merge pull request #283 from magnusvmt/master
Add optional timeout argument to probe
2019-12-30 02:23:02 -06:00
Karl Kroening
8809a54cff
Merge pull request #245 from cclauss/patch-1
Travis CI and tox.ini: Add Python 3.7 to the testing
2019-12-30 02:20:24 -06:00
Karl Kroening
b14785b61b
Merge pull request #247 from kylemcdonald/patch-1
tensorflow streaming typo
2019-12-30 02:19:42 -06:00
magnusvmt
2d3a078f24 Add test for probe timeout and fix for Python2
Fix for Python2 so that timeout is only used as keyword argument if it
is provided

Added a test for the new timeout argument that will run for Python >
3.3.
2019-11-02 16:17:17 +01:00
Karl Kroening
3cef431045
Merge pull request #248 from kylemcdonald/master
Added mono to stereo example
2019-11-02 02:05:25 -05:00
Karl Kroening
d1f1b64aa9
Merge pull request #287 from hp310780/issue239_typographicalerror
Fix for Issue 239 - Fixed typographical error
2019-11-02 01:46:50 -05:00
Karl Kroening
24633404df
Merge pull request #286 from cmehay/duplicate_parameters
Duplicate parameters can be set in kwargs with an iterator
2019-11-02 01:46:20 -05:00
Harshna Patel
a8e0954f41 Fixed typographical error 2019-10-31 20:07:38 +00:00
Christophe Mehay
2a7a2d78c9 Duplicate parameters can be set in kwargs with an iterator
For instance, adding multiple `streamid` values to the output can be done like this:
ffmpeg.input(url).output('video.mp4', streamid=['0:0x101', '1:0x102'])

will output this command line
ffmpeg -i video.mp4 -streamid 0:0x101 -streamid 1:0x102 video.mp4
2019-10-31 12:09:47 +01:00
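The expansion this commit describes can be sketched with a hypothetical `expand_args` helper (not ffmpeg-python's actual argument builder): an iterable value repeats the flag once per element, while a scalar emits it once.

```python
def expand_args(**kwargs):
    # Hypothetical sketch of the behavior described above: a list or
    # tuple value produces one '-key value' pair per element.
    args = []
    for key, values in kwargs.items():
        if not isinstance(values, (list, tuple)):
            values = [values]
        for value in values:
            args += ['-{}'.format(key), str(value)]
    return args

print(expand_args(streamid=['0:0x101', '1:0x102']))
# → ['-streamid', '0:0x101', '-streamid', '1:0x102']
```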
magnusvmt
82a00e4849 Add optional timeout argument to probe
Popen.communicate() supports a timeout argument which is useful in case
there is a risk that the probe hangs.
2019-10-28 16:28:51 +01:00
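The Python 2 compatible pattern this commit describes might look like the sketch below: `timeout` is forwarded to `communicate()` only when the caller supplied one, since Python 2's `communicate()` has no timeout parameter. A generic command stands in for ffprobe.

```python
import subprocess
import sys

def run_with_optional_timeout(cmd, timeout=None):
    # Sketch of the pattern described above: only pass `timeout`
    # through when provided, so Python 2 keeps working.
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    kwargs = {'timeout': timeout} if timeout is not None else {}
    out, _ = p.communicate(**kwargs)
    return out

out = run_with_optional_timeout([sys.executable, '-c', 'print("ok")'],
                                timeout=30)
```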
Kyle McDonald
c99e97b687 added mono to stereo example 2019-08-04 15:18:55 -07:00
Kyle McDonald
bbd56a35a3
tensorflow streaming typo 2019-08-04 15:09:40 -07:00
Christian Clauss
ed9b7f8804
tox.ini: Add py37 to the testing
Python 3.4 is end of life...  Should we drop support for it?
2019-07-31 06:32:29 +02:00
Christian Clauss
c6c2dfdc28
Travis CI: Add Python 3.7 to the tests
Python 3.4 is end of life...  Should we drop support for it?
2019-07-31 06:30:22 +02:00
Karl Kroening
78fb6cf2f1 Release 0.2.0 2019-07-05 19:17:30 -05:00
Karl Kroening
1c9695d2a0 Run Black code formatter 2019-07-05 19:16:43 -05:00
Karl Kroening
c14efc9a19
Merge pull request #230 from komar007/fix_multiple_output_order
Fix multiple output order
2019-07-05 19:15:23 -05:00
Michal Trybus
732bf21397 Label-based order of outputs from multiple-output filters 2019-07-02 10:01:48 +02:00
Michal Trybus
faca0ee87b Test against wrong order of outputs from multiple-output filters 2019-07-02 10:01:47 +02:00
Karl Kroening
63973d0b29
Update README.md 2019-06-28 22:14:44 -05:00
Karl Kroening
ab42ab4dfc
Merge pull request #213 from kkroening/black
Use Black formatter
2019-06-10 17:52:48 -05:00
Karl Kroening
fff79e6b93 Merge remote-tracking branch 'origin/master' into black
Conflicts:
	setup.py
2019-06-10 17:42:39 -05:00
Karl Kroening
1b2634291d Release 0.1.18 2019-06-03 04:06:26 -05:00
Karl Kroening
46eeb41705 Use Black formatter 2019-06-03 04:05:24 -05:00
Karl Kroening
8ea0f4ca4b Use Black formatter 2019-06-03 04:03:37 -05:00
Karl Kroening
a1e1f30a99
Update README.md 2019-06-03 03:52:07 -05:00
Karl Kroening
2db3c4a3ce
Update README.md 2019-06-03 03:30:04 -05:00
Karl Kroening
bde72f4124
Update README.md 2019-06-03 03:09:27 -05:00
Karl Kroening
411b0a14ff
Update README.md 2019-06-03 02:43:04 -05:00
Karl Kroening
49c877eec6
Update README.md 2019-06-03 02:42:29 -05:00
Karl Kroening
995cf67d7d Update docs 2019-06-03 01:54:35 -05:00
Karl Kroening
a3bac57d0a
Merge pull request #212 from kkroening/av-ops
Add `.audio` + `.video` operators
2019-06-03 01:52:58 -05:00
Karl Kroening
5c4a5c720f Include Stream class in API docs 2019-06-03 01:32:44 -05:00
Karl Kroening
881ae4efff Add .audio + .video operators 2019-06-03 01:11:48 -05:00
Karl Kroening
41daf9a953
Merge pull request #209 from JDLH/typo_filters_doc
Fix typo "fmpeg-python" to read "ffmpeg-python"
2019-05-19 04:26:12 -05:00
Jim DeLaHunt
35695d93d4
Fix typo "fmpeg-python" to read "ffmpeg-python" 2019-05-18 00:13:04 -07:00
Karl Kroening
ac111dc3a9
Merge pull request #189 from apatsekin/patch-1
Update README.md
2019-04-18 02:06:13 -05:00
Karl Kroening
1f3ce1e2aa
Merge pull request #188 from akolpakov/ffprobe_extra_args
Ability to accept extra arguments for ffmpeg.probe command (Issue #187)
2019-04-18 02:05:08 -05:00
Karl Kroening
e61653144b
Merge pull request #194 from kkroening/fix-ci
Fix CI issue with ffmpeg download link
2019-04-18 01:58:49 -05:00
Karl Kroening
f641413ed8 Fix CI issue with ffmpeg download link 2019-04-18 01:56:29 -05:00
apatsekin
754d2b7227
Update README.md
non-existing parameter
2019-04-15 13:52:29 -04:00
Andrey Kolpakov
3ddc5f04bf Ability to accept extra arguments for ffmpeg.probe command (Issue #187) 2019-04-11 10:52:26 +02:00
Karl Kroening
eef4da1b27 Slight reordering in readme 2018-11-25 21:59:33 -06:00
Karl Kroening
efae325834 Add another example link in the main readme 2018-11-25 21:57:32 -06:00
Karl Kroening
8de06a2d7a Release 0.1.17 2018-11-25 21:51:48 -06:00
Karl Kroening
c53907acaf Update docs 2018-11-25 21:51:19 -06:00
Karl Kroening
629202806e
Merge pull request #144 from kkroening/run-async
Add `run_async` operator
2018-11-25 21:50:50 -06:00
Karl Kroening
7ed9adf483 Fix example readme typo 2018-11-25 21:50:01 -06:00
Karl Kroening
413b71a4e8 Fix RTSP/TCP socket example to use consistent indentation 2018-11-25 21:43:41 -06:00
Karl Kroening
e5e293fca4 Fix mock import for python3 2018-11-25 21:42:43 -06:00
Karl Kroening
462e34bab3 Add run_async operator 2018-11-25 21:32:04 -06:00
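The operator this commit adds can be sketched as follows (a simplified stand-in, not the library's exact implementation): start the process and return the `Popen` handle immediately instead of blocking until completion, so the caller can stream ffmpeg's output while it runs.

```python
import subprocess
import sys

def run_async(args, pipe_stdout=False):
    # Sketch of the run_async operator described above: return the
    # Popen handle right away; the caller decides when to wait.
    stdout = subprocess.PIPE if pipe_stdout else None
    return subprocess.Popen(args, stdout=stdout)

# A generic command stands in for ffmpeg here:
proc = run_async([sys.executable, '-c', 'print("frame")'], pipe_stdout=True)
out, _ = proc.communicate()
```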
Karl Kroening
4276899cea Merge remote-tracking branch 'nitaym/return-immediately' 2018-11-25 21:26:34 -06:00
Karl Kroening
8d7ec92780
Merge pull request #143 from kkroening/tensorflow-stream-example
Add tensorflow-stream example
2018-11-25 04:39:04 -06:00
Karl Kroening
c533687d81
Update README.md 2018-11-25 04:38:55 -06:00
Karl Kroening
b1d167ccb8 Add facetime example 2018-11-25 04:36:55 -06:00
Karl Kroening
848f87da4e Add numpy_stream example 2018-11-25 03:54:56 -06:00
Karl Kroening
e0eb332e18
Merge pull request #126 from RPing/master
small fix for document
2018-09-17 05:40:22 -05:00
Stephen Chen
6b5a1f2612 small fix for document 2018-09-16 01:16:11 +08:00
Karl Kroening
d47890aebd
Merge pull request #123 from laurentalacoque/master
Added ffmpeg.probe 'cmd' argument
2018-09-10 19:18:30 -05:00
laurentalacoque
5acc6da7ab
Added ffmpeg.probe 'cmd' argument
ffmpeg.<stream>.run() method has a `cmd` argument for selecting `ffmpeg` executable.
This simple hack adds this feature to the probe command
2018-09-07 10:53:50 +02:00
Nitay Megides
ce33461259 Added an option to return immediately from run() so you could process the output of ffmpeg while it's processing 2018-07-27 04:31:32 +03:00
Karl Kroening
b6f150c4c3 Release 0.1.16 2018-07-16 01:30:18 +02:00
Karl Kroening
0625e3802b Update docs 2018-07-16 01:28:36 +02:00
Karl Kroening
6523c46fa4
Merge pull request #104 from kkroening/filter
Use `filter` as the canonical name for `filter_`
2018-07-16 01:27:39 +02:00
Karl Kroening
9fc654733a Merge remote-tracking branch 'origin/master' into filter 2018-07-16 01:10:25 +02:00
Karl Kroening
71fb7435fa
Merge pull request #103 from kkroening/passthrough
Fix `-map` to not use brackets for passthroughs; fixes #102, #23
2018-07-16 01:10:02 +02:00
Karl Kroening
3cf993e910 Add filter operator 2018-07-13 04:43:56 +02:00
Karl Kroening
217bd2bde6 Fix -map to not use brackets for passthroughs; fixes #102, #23 2018-07-13 04:28:21 +02:00
Karl Kroening
5916891bbf Merge remote-tracking branch 'origin/master' 2018-07-13 04:06:26 +02:00
Karl Kroening
c65c03b869 Update docs 2018-07-13 04:06:15 +02:00
Karl Kroening
ca85cbbbd3
Add examples 2018-07-13 03:58:30 +02:00
Karl Kroening
06d4a6fa09 Merge remote-tracking branch 'origin/master' 2018-07-13 03:52:30 +02:00
Karl Kroening
7d1ac28296 Add glob example graphs 2018-07-13 03:50:01 +02:00
Karl Kroening
ce06215af5
Update README.md 2018-07-04 00:16:41 -07:00
Karl Kroening
ae4b6a964d
Update README.md 2018-07-04 00:12:18 -07:00
Karl Kroening
f1e8201ba6
Update examples readme 2018-07-03 23:55:58 -07:00
Karl Kroening
435e574f5a
Update README.md 2018-06-30 03:00:45 -07:00
Karl Kroening
f5689ba156 Release 0.1.15 2018-06-30 02:34:51 -07:00
Karl Kroening
4ec72e0669
Merge pull request #98 from kkroening/example-graphs
Add example graphs
2018-06-30 02:33:02 -07:00
Karl Kroening
4a39bafe20
Add graphs in examples readme 2018-06-30 02:32:42 -07:00
Karl Kroening
02446b0298 Add example graphs 2018-06-30 02:24:22 -07:00
Karl Kroening
7c872c56b9
Merge pull request #97 from kkroening/concat-av
Support `concat` a/v params
2018-06-30 02:23:47 -07:00
Karl Kroening
0f18c75dab Support concat a/v params 2018-06-30 02:17:00 -07:00
Karl Kroening
c4ebc01978
Update README.md 2018-06-28 00:14:30 -07:00
Karl Kroening
86d2a3b5e4
Update README.md 2018-06-28 00:07:58 -07:00
Karl Kroening
c6ab775d6f
Update README.md 2018-06-27 23:55:38 -07:00
Karl Kroening
5474c73d3f
Update README.md 2018-06-27 23:54:24 -07:00
Karl Kroening
122e7bc2e2 Release 0.1.14 2018-06-27 23:51:50 -07:00
Karl Kroening
3834d27a2a
Update README.md 2018-06-27 23:49:54 -07:00
Karl Kroening
5da7cdd1ea
Update README.md 2018-06-27 23:43:12 -07:00
Karl Kroening
be834e04b3
Update examples readme 2018-06-27 23:42:32 -07:00
Karl Kroening
3520e9318d
Merge pull request #94 from kkroening/jupyter-demo
Add jupyter demo
2018-06-27 23:41:41 -07:00
Karl Kroening
2c9b39214e
Update examples readme 2018-06-27 23:41:14 -07:00
Karl Kroening
a82f1c93ca Add jupyter demo 2018-06-27 23:37:39 -07:00
Karl Kroening
a0caa7b017
Merge pull request #93 from kkroening/view-pipe
Add `pipe` param to `view`
2018-06-27 23:36:31 -07:00
Karl Kroening
4f97d1d679 Add pipe param to view 2018-06-27 23:35:54 -07:00
Karl Kroening
e6d09f532a Update setup.py description 2018-06-27 22:53:37 -07:00
Karl Kroening
fc9fd015e6 Release version 0.1.13 2018-06-27 22:50:58 -07:00
Karl Kroening
57eeea2356
Merge pull request #87 from kkroening/feature-80
Add `video_bitrate` and `audio_bitrate` params
2018-06-17 21:34:54 -05:00
Karl Kroening
0cc0bfaaaa Merge remote-tracking branch 'origin/master' into feature-80
Conflicts:
	ffmpeg/_run.py
	ffmpeg/tests/test_ffmpeg.py
2018-06-16 14:42:41 -05:00
Karl Kroening
1681e8c8df
Update README.md 2018-06-15 23:24:24 -05:00
Karl Kroening
3be068ae1a
Update README.md 2018-06-15 23:21:52 -05:00
Karl Kroening
fbaab995b7 Update jupyter screenshot 2018-06-02 02:45:24 -07:00
Karl Kroening
bc07a19090
Update examples readme 2018-06-02 02:41:30 -07:00
Karl Kroening
f2548aa7e5
Merge pull request #89 from kkroening/examples-readme
Update examples README.md
2018-06-02 02:40:06 -07:00
Karl Kroening
e5276f4297
Update examples readme 2018-06-02 02:39:50 -07:00
Karl Kroening
01d0945689 Merge remote-tracking branch 'origin/examples-readme' into examples-readme 2018-06-02 02:36:54 -07:00
Karl Kroening
1cf6dcb81a
Update examples readme 2018-06-02 02:36:45 -07:00
Karl Kroening
8f0f53411f Add jupyter screenshot 2018-06-02 02:36:08 -07:00
Karl Kroening
5af85c889d
Update examples readme 2018-06-02 02:31:33 -07:00
Karl Kroening
bf87179efe
Update README.md 2018-06-02 02:21:59 -07:00
Karl Kroening
ef390042e5 Add initial examples readme 2018-06-02 02:19:58 -07:00
Karl Kroening
de124673e0 Update examples 2018-06-02 02:17:56 -07:00
Karl Kroening
6364513485 Fix get_video_thumbnail example 2018-06-02 01:05:53 -07:00
Karl Kroening
4a70f6a868 Add get_video_thumbnail example 2018-06-02 01:04:39 -07:00
Karl Kroening
6274e7abf9 Add video_info example 2018-06-02 00:51:10 -07:00
Karl Kroening
e1dded89b1 Add read_frame_as_jpeg example 2018-06-02 00:42:51 -07:00
Karl Kroening
e294bd7753 Merge remote-tracking branch 'origin/master' 2018-06-02 00:33:58 -07:00
Karl Kroening
a01f68e8af
Merge pull request #88 from kkroening/output-video-size
Support video_size output tuple
2018-06-02 00:33:28 -07:00
Karl Kroening
3503f7301b Add ffmpeg-numpy.ipynb example 2018-06-02 00:32:59 -07:00
Karl Kroening
54db6e4272
Update README.md 2018-06-02 00:30:24 -07:00
Karl Kroening
c21e8c103f Support video_size output tuple 2018-06-02 00:25:47 -07:00
Karl Kroening
0ed77a30c7 #80: add video_bitrate and audio_bitrate params 2018-06-01 23:41:24 -07:00
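The parameter mapping this commit describes can be sketched with a hypothetical helper (not the library's actual code): the friendly keyword parameters become ffmpeg's standard `-b:v` / `-b:a` bitrate flags.

```python
def bitrate_args(video_bitrate=None, audio_bitrate=None):
    # Hypothetical sketch of the mapping described above.
    args = []
    if video_bitrate is not None:
        args += ['-b:v', str(video_bitrate)]
    if audio_bitrate is not None:
        args += ['-b:a', str(audio_bitrate)]
    return args

print(bitrate_args(video_bitrate='1M', audio_bitrate='192k'))
# → ['-b:v', '1M', '-b:a', '192k']
```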
Karl Kroening
593cd3e790 Release 0.1.12 2018-06-01 22:58:40 -07:00
Karl Kroening
e6fd3ff7c8
Merge pull request #85 from kkroening/inout
Add input/output support in `run` command; update docs
2018-05-20 01:46:58 -07:00
Karl Kroening
6a2d3381b7 Fix tests 2018-05-20 01:43:43 -07:00
Karl Kroening
9a487e8603 Fix exception params 2018-05-20 01:29:59 -07:00
Karl Kroening
940b05f3fc Fix ffprobe exception test 2018-05-20 01:21:30 -07:00
Karl Kroening
4558c25ced Update tox.ini 2018-05-20 01:17:54 -07:00
Karl Kroening
8711e16c2d Merge remote-tracking branch 'origin/master' into inout 2018-05-20 01:14:48 -07:00
Karl Kroening
57b8f9fa22 Pull docs from #85 2018-05-20 01:14:10 -07:00
Karl Kroening
ac57e2df13 Add input/output support in run command; update docs 2018-05-20 01:13:07 -07:00
Karl Kroening
90561c7a8a Release version 0.1.11 2018-05-09 03:31:04 -05:00
Karl Kroening
2a2d5a43f1
Merge pull request #83 from kkroening/feature-30
#30: add `global_args` operator
2018-05-09 03:29:24 -05:00
Karl Kroening
84355d419c #30: re-futurize 2018-05-09 03:15:12 -05:00
Karl Kroening
3e68bc8c9a #30: add global_args operator 2018-05-09 03:09:21 -05:00
Karl Kroening
7077eaad64
Merge pull request #67 from kkroening/probe
Add ffprobe support
2018-05-09 01:28:51 -05:00
Karl Kroening
8420f3b813
Merge pull request #45 from Depau/stream_selectors
Stream selectors, `.map` operator (audio support)
2018-05-09 01:28:10 -05:00
Karl Kroening
c162eab2a9 Update docs 2018-05-09 01:26:28 -05:00
Karl Kroening
1e63419a93 Test bad stream selectors 2018-03-11 21:33:32 -07:00
Karl Kroening
57abf6e86e Change selector syntax from [:a] to [a]; remove map operator (for now) 2018-03-11 21:27:26 -07:00
Karl Kroening
6169b89321 Minor improvements (formatting, etc) 2018-03-11 20:03:06 -07:00
Karl Kroening
809ab6cd17 Merge remote-tracking branch 'origin/master' into stream_selectors 2018-03-10 19:01:57 -08:00
Davide Depau
4927bbeea9
Fix string type inconsistency error in ffprobe test 2018-03-08 23:02:04 +01:00
Davide Depau
87a168a063
Do not use Exception.message, use str(Exception) instead 2018-03-08 22:52:06 +01:00
Karl Kroening
2fff94af6c Fix probe exception handling and add test 2018-01-27 22:51:05 -08:00
Karl Kroening
a029d7aacc
Merge pull request #46 from Depaulicious/asplit_filter
Add `asplit` filter
2018-01-27 22:22:19 -08:00
Karl Kroening
6ebda44a63
Merge pull request #58 from 153957/patch-1
Cleanup graph source file after rendering graph to pdf
2018-01-27 22:21:27 -08:00
Karl Kroening
628bcc145e
Merge pull request #65 from kkroening/Depaulicious-patch-1
Add `filter_multi_output` to `__all__` so it's available in API
2018-01-27 21:40:31 -08:00
Karl Kroening
24e737f78e Fix ffprobe string decoding 2018-01-27 21:32:11 -08:00
Karl Kroening
25bda398c9 Add ffprobe support 2018-01-27 21:21:05 -08:00
Davide Depau
f1e418be4c
Add filter_multi_output to __all__ so it's available in API 2018-01-26 14:38:57 +01:00
Arne de Laat
50c4a8985d
Cleanup graph source file after rendering graph to pdf 2018-01-16 22:10:41 +01:00
Karl Kroening
19f316e9c5 Release version 0.1.10 2018-01-13 21:07:46 -08:00
Karl Kroening
09e6e469a7
Merge pull request #52 from kkroening/split-silence
Add examples: `split_silence` + `transcribe`
2018-01-13 16:16:32 -08:00
Davide Depau
ef9b102676
Merge branch 'master' into stream_selectors 2018-01-12 21:55:38 +01:00
Karl Kroening
3a818cc33d
Update requirements.txt 2018-01-11 22:22:31 -08:00
Karl Kroening
0672fd0e19
Merge pull request #56 from kkroening/gh-license
Update LICENSE with full license text
2018-01-11 22:20:45 -08:00
Karl Kroening
273bfd0ec6
Merge pull request #55 from Depaulicious/syntax_highlight
Add syntax highlighting to README.md
2018-01-11 22:20:12 -08:00
Davide Depau
221f57428d
Update LICENSE with full license text
This makes sure it is detected by GitHub and shown in the interface
2018-01-10 11:41:28 +01:00
Davide Depau
c87fd5cf56
Add tests for asplit filter 2018-01-10 10:38:42 +01:00
Davide Depau
e7fbb288d4
Fix name of asplit filter 2018-01-10 10:35:23 +01:00
Davide Depau
e70984065d
Add syntax highlighting to README.md 2018-01-10 10:20:56 +01:00
Davide Depau
ea90d91dfe
Expand unclear one-line implicit conditional statement 2018-01-09 16:04:33 +01:00
Davide Depau
df9bd7316f
Replace past.builtins.basestring with a custom one to workaround bug in Ubuntu's Python3 2018-01-09 16:04:33 +01:00
Davide Depau
497105f929
Reimplement .map logic making Node immutable 2018-01-09 16:04:24 +01:00
Davide Depau
861980db0b
Simplify, expand and explain complicated loops in dag.py 2018-01-09 15:47:09 +01:00
Davide Depau
783bdbdb37
Remove unused imports 2018-01-09 15:47:09 +01:00
Davide Depau
1a46471553
Remove useless _get_stream_name function 2018-01-09 15:47:09 +01:00
Davide Depau
1070b3e51b
Remove commented code 2018-01-09 15:47:09 +01:00
Davide Depau
03762a5cc5
Expand complicated format + list comprehension into its own function 2018-01-09 15:47:09 +01:00
Davide Depau
b4503a183c
Allow output to be created without mapped streams 2018-01-09 15:47:09 +01:00
Davide Depau
db83137f53
Add tests for stream selection and mapping 2018-01-09 15:47:08 +01:00
Davide Depau
90652306ea
Explicitly include -map [0] when output has multiple mapped streams 2018-01-09 15:47:08 +01:00
Davide Depau
0d95d9b58d
Implement .map() operator, allow multiple streams in .output() 2018-01-09 15:47:08 +01:00
Davide Depau
aa0b0bbd03
Generate multiple -map for outputs with multiple incoming edges 2018-01-09 15:47:08 +01:00
Davide Depau
44091f8a4a
Allow outputs to be created empty; streams can be mapped later 2018-01-09 15:47:08 +01:00
Davide Depau
c2c6a864d2
Make sure item is instance of slice in __getitem__ 2018-01-09 15:47:08 +01:00
Davide Depau
3f671218a6
Take into account upstream selectors in topological sort, get_args() and view() 2018-01-09 15:47:08 +01:00
Davide Depau
f6d014540a
Add __getitem__ to Stream too, simplify selector syntax
* No need to have a split node in between, you can just do stream[:"a"]
* Split nodes are still needed to do actual splitting.
2018-01-09 15:47:08 +01:00
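The selector syntax this commit describes can be sketched with a minimal stand-in class (not the library's real `Stream`): `stream[:'a']` reaches `__getitem__` as `slice(None, 'a')`, and the slice's stop value names the selected substream.

```python
class Stream(object):
    # Minimal sketch of the selector syntax described above.
    def __init__(self, label, selector=None):
        self.label = label
        self.selector = selector

    def __getitem__(self, item):
        # stream[:'a'] arrives here as slice(None, 'a', None).
        if not isinstance(item, slice) or item.start is not None:
            raise ValueError('Invalid selector: {!r}'.format(item))
        return Stream(self.label, item.stop)

audio = Stream('0')[:'a']
print(audio.selector)  # prints a
```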
Davide Depau
646a0dcae8
Implement selectors in Stream and Node
* Selectors are used just like 'split', i.e. `stream.split()[0:"audio"]`
2018-01-09 15:47:08 +01:00
Davide Depau
273cf8f205
Allow extra, unhashed objects to be added to the incoming_edge_map 2018-01-09 15:47:07 +01:00
Karl Kroening
6de40d80c5
Update README.md 2018-01-09 15:47:07 +01:00
Karl Kroening
9183458527
Update README.md 2018-01-09 15:47:07 +01:00
Karl Kroening
6dd34a21d5
Update README.md 2018-01-09 15:47:07 +01:00
Karl Kroening
63e4218725
Update README.md 2018-01-09 15:47:07 +01:00
Karl Kroening
2a95d6f2e1
Update README.md 2018-01-09 15:47:07 +01:00
Karl Kroening
be3f300de5
Update README.md 2018-01-09 15:47:07 +01:00
Karl Kroening
0e47a6a6e5
Update README.md 2018-01-09 15:47:07 +01:00
Karl Kroening
e66a0939c4
Update README.md 2018-01-09 15:47:07 +01:00
Karl Kroening
d904e153cf
Update README.md 2018-01-09 15:47:06 +01:00
Karl Kroening
fc86457eb8
Update README.md 2018-01-09 15:47:06 +01:00
Karl Kroening
fb7fe24873
Update README.md 2018-01-09 15:47:06 +01:00
Karl Kroening
f2a37d3eac
Update README.md 2018-01-09 15:47:06 +01:00
Karl Kroening
614a558266
Update README.md 2018-01-09 15:47:06 +01:00
Karl Kroening
65a068267b
Update README.md 2018-01-09 15:47:06 +01:00
Karl Kroening
db89774454
Update README.md 2018-01-09 15:47:06 +01:00
Noah Stier
fabd401e96
Add 'crop' filter 2018-01-09 15:47:06 +01:00
Karl Kroening
0682a3e2b2
Merge pull request #53 from kkroening/fix-starttime0
Fix issue with input ss=0
2018-01-07 23:41:20 -08:00
Karl Kroening
338a1286f7 Fix issue with start_time=0 2018-01-07 23:26:53 -08:00
Karl Kroening
5b813cdecf
Merge pull request #51 from kkroening/compile
Add `compile` operator
2018-01-07 18:32:18 -08:00
Karl Kroening
f5f7ee2073 Improve logging in split_silence; add transcribe example 2018-01-07 04:43:05 -08:00
Karl Kroening
ad58a38d59 Finalize split_silence 2018-01-07 03:43:20 -08:00
Karl Kroening
4311e33859 Add split_silence example 2018-01-07 03:32:05 -08:00
Karl Kroening
f818cffc55 Add compile operator 2018-01-07 02:28:13 -08:00
Karl Kroening
940e3e7681
Merge pull request #50 from kkroening/drawtext-doc
Fix drawtext documentation
2018-01-06 16:37:43 -08:00
Karl Kroening
9f7de34f2e Remove logo_with_border.png 2018-01-06 11:36:26 -08:00
Karl Kroening
1c09e1c39f Fix drawtext documentation 2018-01-06 11:35:20 -08:00
Karl Kroening
a599899e19 Update formula image 2017-12-29 20:38:15 -08:00
Karl Kroening
7781120d1c Add text to formula image 2017-12-29 20:35:58 -08:00
Karl Kroening
263732b880 Merge remote-tracking branch 'origin/master' 2017-12-29 20:29:06 -08:00
Karl Kroening
f114157655 Update logo+formula images 2017-12-29 20:28:51 -08:00
Karl Kroening
43553ba9d9
Use github for image hosting 2017-12-26 13:03:45 -06:00
Karl Kroening
2b6babed08 Revert "Merry Birthday"
ibin links went dead. Reverting for now.

This reverts commit c74c8fd07de332790a2f36617c26b9fda5d4143e.
2017-12-25 21:27:14 -06:00
Karl Kroening
36a2261f06
Merge pull request #48 from lloti/patch-1
Merry Birthday
2017-12-25 21:24:47 -06:00
Dim
c74c8fd07d
Merry Birthday
Maybe a nicer logo for the new year?
2017-12-25 03:39:01 -05:00
Karl Kroening
e1c93044d5
Merge pull request #47 from kkroening/remove-py33
Remove python 3.3 support
2017-12-23 14:14:00 -06:00
Karl Kroening
fc44ffabf0 Add logo/formula 2017-12-23 00:49:32 -06:00
Karl Kroening
a3c9b05edd Remove python 3.3 support since pytest seems to no longer support 3.3 2017-12-23 00:09:02 -06:00
Davide Depau
755fb843de
Also provide the number of splits to asplit filter 2017-12-22 17:11:23 +01:00
Davide Depau
7bc77ff714
Add asplit filter 2017-12-22 16:22:41 +01:00
Karl Kroening
d59bb7a592 Bump version 2017-11-20 00:35:04 -08:00
Karl Kroening
562be84048 Bump version 2017-11-20 00:34:17 -08:00
Karl Kroening
697808d743
Update README.md 2017-11-03 23:46:50 -07:00
Karl Kroening
74f0eabf52
Update README.md 2017-11-03 23:45:53 -07:00
Karl Kroening
a30db324d3
Update README.md 2017-11-03 23:42:16 -07:00
Karl Kroening
f1607d31f7
Update README.md 2017-11-03 23:41:01 -07:00
Karl Kroening
e212c354f1
Update README.md 2017-11-03 23:37:53 -07:00
Karl Kroening
930ebfc158
Update README.md 2017-11-03 23:24:38 -07:00
Karl Kroening
e8daf8e61a
Update README.md 2017-11-03 23:11:03 -07:00
Karl Kroening
becc50672d
Update README.md 2017-11-03 23:08:01 -07:00
Karl Kroening
13cee5ee30
Update README.md 2017-11-03 23:07:09 -07:00
Karl Kroening
c3a9e84eef
Update README.md 2017-11-03 23:06:26 -07:00
Karl Kroening
38079c3d04
Update README.md 2017-11-03 23:06:16 -07:00
Karl Kroening
f729fbaec7
Update README.md 2017-11-03 23:05:23 -07:00
Karl Kroening
e04d5e4e9d
Update README.md 2017-11-03 22:41:55 -07:00
Karl Kroening
8a93026773
Update README.md 2017-11-03 21:12:29 -07:00
Karl Kroening
84849ea814
Update README.md 2017-11-03 21:12:11 -07:00
Karl Kroening
a243609453
Merge pull request #31 from noahstier/AddCropFilter
Add 'crop' filter
2017-11-02 17:37:55 -07:00
Karl Kroening
0d1123801a Merge remote-tracking branch 'origin/master' into AddCropFilter 2017-11-02 17:08:42 -07:00
Karl Kroening
7a44e54955 Merge remote-tracking branch 'origin/master' 2017-11-02 17:08:16 -07:00
Karl Kroening
b73b312e64
Merge pull request #33 from noahstier/FixTravisYml
Update ffmpeg build URL
2017-11-02 17:06:54 -07:00
Karl Kroening
86eccbb08b Freeze requirements 2017-11-02 17:06:28 -07:00
Karl Kroening
39741f49c3 travis: use generic directory name for ffmpeg 2017-11-01 00:25:21 -07:00
Noah Stier
38f2e703d2 update ffmpeg build dir name 2017-10-06 00:19:14 -07:00
Noah Stier
c4eae56495 Update ffmpeg build URL in more places 2017-10-06 00:15:08 -07:00
Noah Stier
39cbd53652 Update ffmpeg build URL 2017-10-06 00:09:56 -07:00
Noah Stier
00fb91a4c5 Add 'crop' filter 2017-10-05 23:50:38 -07:00
Karl Kroening
f8409d4397 Merge remote-tracking branch 'origin/feature/17' 2017-07-15 15:54:02 -06:00
Karl Kroening
9c1661f3d7 Merge pull request #21 from kkroening/feature/18
Add graph visualization
2017-07-12 02:14:03 -06:00
Karl Kroening
7669492575 Merge pull request #20 from kkroening/feature/17
Add support for multi-output filters; implement `split` filter
2017-07-12 02:13:35 -06:00
Karl Kroening
0d60de3fe9 Merge branch 'feature/17' into feature/18 2017-07-12 02:10:51 -06:00
Karl Kroening
4640adabe0 Futurize 2017-07-12 02:10:44 -06:00
Karl Kroening
35cd113da0 Merge branch 'feature/17' into feature/18 2017-07-12 02:09:24 -06:00
Karl Kroening
7b2d8b63fc Merge remote-tracking branch 'origin/master' into feature/17
Conflicts:
	ffmpeg/_filters.py
	ffmpeg/_utils.py
	ffmpeg/nodes.py
	ffmpeg/tests/test_ffmpeg.py
2017-07-12 02:08:19 -06:00
Karl Kroening
17e9e460d6 Merge pull request #25 from Depaulicious/escapeargs
Escape terminator characters in filter arguments
2017-07-12 02:00:39 -06:00
Karl Kroening
fbf01f24ab Remove extra sample data 2017-07-12 01:19:28 -06:00
Karl Kroening
c19dcaca30 Fix drawbox=>drawtext 2017-07-12 01:18:48 -06:00
Karl Kroening
5f4bfd1fb3 Robustly handle string escaping 2017-07-12 00:33:43 -06:00
Karl Kroening
cf1b7bfd4b #17: auto-generate split output count 2017-07-10 23:38:30 -06:00
Davide Depau
efc0104ae4 escape "[]=;:," characters in filter arguments to avoid early termination 2017-07-10 22:15:52 +02:00
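The escaping this commit describes can be sketched with a simplified helper (not the library's exact implementation): each special character is backslash-prefixed so it can no longer terminate the filter argument early.

```python
def escape_chars(text, special_chars):
    # Sketch of the escaping described above: backslash-prefix every
    # occurrence of each special character.
    for ch in special_chars:
        text = text.replace(ch, '\\' + ch)
    return text

print(escape_chars('this,that[0]', '[]=;:,'))
```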
Karl Kroening
2d6b0d4730 #18: use get_stream_spec_nodes in view 2017-07-09 16:00:41 -06:00
Karl Kroening
4af484feee Merge branch 'feature/17' into feature/18 2017-07-09 15:52:41 -06:00
Karl Kroening
5d78a2595d #17: fix merge_outputs; allow stream_spec in get_args+run 2017-07-09 15:50:51 -06:00
Karl Kroening
13d9e2c3fa Merge branch 'feature/17' into feature/18 2017-07-06 03:42:09 -06:00
Karl Kroening
c6e2f05e5b #17: add short_repr for input and output nodes 2017-07-06 03:42:03 -06:00
Karl Kroening
a8b1cb63f2 #18: use short_repr in view 2017-07-06 03:41:42 -06:00
Karl Kroening
0b1259238f #18: re-add __init__.py changes 2017-07-06 03:40:56 -06:00
Karl Kroening
8a761f055d #18: have view operator return stream_spec 2017-07-06 03:35:50 -06:00
Karl Kroening
0d63a9e801 Merge branch 'feature/17' into feature/18 2017-07-06 03:35:29 -06:00
Karl Kroening
8337d34b82 #17: add proper handling of split operator 2017-07-06 03:35:20 -06:00
Karl Kroening
662c56eb5b #17: fix python 3 support 2017-07-06 02:50:22 -06:00
Karl Kroening
b548092d48 #17: fix __init__ changes 2017-07-06 02:49:05 -06:00
Karl Kroening
2d512994ff #18: use better arrow 2017-07-06 02:41:57 -06:00
Karl Kroening
39466beb62 Merge branch 'feature/17' into feature/18 2017-07-06 02:37:40 -06:00
Karl Kroening
b7fc331722 Add split operator 2017-07-06 02:32:45 -06:00
Karl Kroening
543cd1b4e3 #18: improve edge labelling 2017-07-06 02:32:30 -06:00
Karl Kroening
1955547202 #18: fix to use latest #17 changes 2017-07-06 02:24:40 -06:00
Karl Kroening
656d9fa7f7 Merge branch 'feature/17' into feature/18 2017-07-06 02:23:55 -06:00
Karl Kroening
6887ad8bac Massive refactor to break nodes into streams+nodes 2017-07-06 02:23:13 -06:00
Karl Kroening
37c2094a9c Rename graph.py to _view.py; handle graphviz import errors; use tempfile 2017-07-05 22:30:00 -06:00
Karl Kroening
7613f746d2 Merge branch 'feature/17' into feature/18 2017-07-05 04:52:55 -06:00
Karl Kroening
aa5156d9c9 #17: allow multiple outgoing edges with same label 2017-07-05 04:52:45 -06:00
Karl Kroening
241ede2271 #18: add initial graph.py 2017-07-05 04:23:35 -06:00
Karl Kroening
11a24d0432 #17: fix get_outgoing_edges 2017-07-05 04:23:05 -06:00
Karl Kroening
fc07f6c4fa #17: remove Node._parents 2017-07-05 04:07:30 -06:00
Karl Kroening
7236984626 #17: don't rely on in 2017-07-05 03:31:29 -06:00
Karl Kroening
677967b4c2 Merge remote-tracking branch 'origin/master' into feature/17
Conflicts:
	ffmpeg/_run.py
	ffmpeg/nodes.py
2017-07-05 03:19:43 -06:00
Karl Kroening
a986cbe9fb Merge remote-tracking branch 'origin/master' 2017-07-05 03:14:30 -06:00
Karl Kroening
6a9a12e718 #17: move graph stuff to dag.py; add edge labelling 2017-07-05 03:13:30 -06:00
Karl Kroening
fc946be164 Pull in _NodeBase from actorgraph; include short-hash in repr 2017-07-04 17:45:20 -06:00
Karl Kroening
89f91535c6 Merge pull request #18 from Depaulicious/assert-remove
Convert assertions to if+raise
2017-07-02 14:14:19 -06:00
Davide Depau
086613bb09 convert assertions to if+raise 2017-06-30 15:54:09 +02:00
Karl Kroening
dcebe917a7 Update docs 2017-06-29 00:17:01 -06:00
Karl Kroening
d4b8900646 Merge pull request #16 from kkroening/cleanup-hashing
Cleanup hashing
2017-06-17 01:19:04 -06:00
Karl Kroening
f1e6212765 Clean up hashing 2017-06-17 00:14:18 -06:00
Karl Kroening
1439c885e0 Merge pull request #15 from Depaulicious/patch-1
Add docstring to `merge_output`
2017-06-16 22:55:24 -06:00
Karl Kroening
e3a846b312 Remove .python-version 2017-06-16 15:53:43 -06:00
Davide Depau
31f19e184e add docstring to merge_output 2017-06-16 15:13:52 +02:00
Karl Kroening
b002e8c31b Update setup.py 2017-06-15 23:13:03 -06:00
74 changed files with 5661 additions and 2289 deletions

45
.github/workflows/ci.yml vendored Normal file

@ -0,0 +1,45 @@
name: CI
on:
- push
- pull_request
jobs:
test:
runs-on: ubuntu-20.04
strategy:
fail-fast: false
matrix:
python-version:
- "2.7"
- "3.5"
- "3.6"
- "3.7"
- "3.8"
- "3.9"
- "3.10"
steps:
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install ffmpeg
run: |
sudo apt update
sudo apt install ffmpeg
- name: Setup pip + tox
run: |
python -m pip install --upgrade \
"pip==20.3.4; python_version < '3.6'" \
"pip==21.3.1; python_version >= '3.6'"
python -m pip install tox==3.24.5 tox-gh-actions==2.9.1
- name: Test with tox
run: tox
black:
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v3
- name: Black
run: |
# TODO: use standard `psf/black` action after dropping Python 2 support.
pip install black==21.12b0 click==8.0.2 # https://stackoverflow.com/questions/71673404
black ffmpeg --check --color --diff

3
.gitignore vendored

@ -2,6 +2,7 @@
.eggs
.tox/
dist/
ffmpeg/tests/sample_data/dummy2.mp4
ffmpeg/tests/sample_data/out*.mp4
ffmpeg_python.egg-info/
venv*
build/


@ -1,5 +0,0 @@
3.3.6
3.4.6
3.5.3
3.6.1
jython-2.7.0


@ -1,36 +0,0 @@
language: python
before_install:
- >
[ -f ffmpeg-3.3.1-64bit-static/ffmpeg ] || (
curl -O https://johnvansickle.com/ffmpeg/releases/ffmpeg-3.3.1-64bit-static.tar.xz &&
tar Jxf ffmpeg-3.3.1-64bit-static.tar.xz
)
matrix:
include:
- python: 2.7
env:
- TOX_ENV=py27
- python: 3.3
env:
- TOX_ENV=py33
- python: 3.4
env:
- TOX_ENV=py34
- python: 3.5
env:
- TOX_ENV=py35
- python: 3.6
env:
- TOX_ENV=py36
- python: pypy
env:
- TOX_ENV=pypy
install:
- pip install tox
script:
- export PATH=$(readlink -f ffmpeg-3.3.1-64bit-static):$PATH
- tox -e $TOX_ENV
cache:
directories:
- .tox
- ffmpeg-3.3.1-64bit-static

208
LICENSE

@ -1,13 +1,201 @@
Copyright 2017 Karl Kroening
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
http://www.apache.org/licenses/LICENSE-2.0
1. Definitions.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2017 Karl Kroening
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

237
README.md

@ -1,16 +1,21 @@
# ffmpeg-python: Python bindings for FFmpeg
[![Build status](https://travis-ci.org/kkroening/ffmpeg-python.svg?branch=master)](https://travis-ci.org/kkroening/ffmpeg-python)
[![CI][ci-badge]][ci]
[ci-badge]: https://github.com/kkroening/ffmpeg-python/actions/workflows/ci.yml/badge.svg
[ci]: https://github.com/kkroening/ffmpeg-python/actions/workflows/ci.yml
<img src="https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/doc/formula.png" alt="ffmpeg-python logo" width="60%" />
## Overview
There are tons of Python FFmpeg wrappers out there but they seem to lack complex filter support. `ffmpeg-python` works well for simple as well as complex signal graphs.
## Quickstart
Flip a video horizontally:
```
```python
import ffmpeg
stream = ffmpeg.input('input.mp4')
stream = ffmpeg.hflip(stream)
@ -19,9 +24,10 @@ ffmpeg.run(stream)
```
Or if you prefer a fluent interface:
```
```python
import ffmpeg
(ffmpeg
(
ffmpeg
.input('input.mp4')
.hflip()
.output('output.mp4')
@ -29,114 +35,253 @@ import ffmpeg
)
```
## [API reference](https://kkroening.github.io/ffmpeg-python/)
## Complex filter graphs
FFmpeg is extremely powerful, but its command-line interface gets really complicated really quickly - especially when working with signal graphs and doing anything more than trivial.
FFmpeg is extremely powerful, but its command-line interface gets really complicated rather quickly - especially when working with signal graphs and doing anything more than trivial.
Take for example a signal graph that looks like this:
![Signal graph](https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/doc/graph1.png)
The corresponding command-line arguments are pretty gnarly:
```
ffmpeg -i input.mp4 \
-filter_complex "\
[0]trim=start_frame=10:end_frame=20[v0];\
[0]trim=start_frame=30:end_frame=40[v1];\
[v0][v1]concat=n=2[v2];\
[1]hflip[v3];\
[v2][v3]overlay=eof_action=repeat[v4];\
[v4]drawbox=50:50:120:120:red:t=5[v5]"\
-map [v5] output.mp4
```bash
ffmpeg -i input.mp4 -i overlay.png -filter_complex "[0]trim=start_frame=10:end_frame=20[v0];\
[0]trim=start_frame=30:end_frame=40[v1];[v0][v1]concat=n=2[v2];[1]hflip[v3];\
[v2][v3]overlay=eof_action=repeat[v4];[v4]drawbox=50:50:120:120:red:t=5[v5]"\
-map [v5] output.mp4
```
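As a rough illustration (a hypothetical helper, not part of ffmpeg-python or this diff), each `-filter_complex` step consumes the bracketed labels in front of the filter name and emits the label in trailing brackets, and the steps are joined with semicolons:

```python
# Hypothetical helper showing how filter_complex stream labels chain;
# not part of ffmpeg-python's API.
def filter_step(inputs, filter_expr, output):
    """Render one filter_complex step as [in0][in1]filter[out]."""
    return ''.join('[%s]' % i for i in inputs) + filter_expr + '[%s]' % output

steps = [
    filter_step(['0'], 'trim=start_frame=10:end_frame=20', 'v0'),
    filter_step(['0'], 'trim=start_frame=30:end_frame=40', 'v1'),
    filter_step(['v0', 'v1'], 'concat=n=2', 'v2'),
    filter_step(['1'], 'hflip', 'v3'),
    filter_step(['v2', 'v3'], 'overlay=eof_action=repeat', 'v4'),
    filter_step(['v4'], 'drawbox=50:50:120:120:red:t=5', 'v5'),
]
filter_complex = ';'.join(steps)
print(filter_complex)
```

Reading the labels this way makes the graph structure visible: `[v0]` and `[v1]` feed `concat`, whose output `[v2]` is overlaid with `[v3]`, and so on down to `[v5]`, which `-map` selects for the output file.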
Maybe this looks great to you, but if you're not an FFmpeg command-line expert, it probably looks alien.
If you're like me and find Python to be powerful and readable, it's easy with `ffmpeg-python`:
```
If you're like me and find Python to be powerful and readable, it's easier with `ffmpeg-python`:
```python
import ffmpeg
in_file = ffmpeg.input('input.mp4')
overlay_file = ffmpeg.input('overlay.png')
(ffmpeg
(
ffmpeg
.concat(
in_file.trim(start_frame=10, end_frame=20),
in_file.trim(start_frame=30, end_frame=40),
)
.overlay(overlay_file.hflip())
.drawbox(50, 50, 120, 120, color='red', thickness=5)
.output(TEST_OUTPUT_FILE)
.output('out.mp4')
.run()
)
```
`ffmpeg-python` takes care of running `ffmpeg` with the command-line arguments that correspond to the above filter diagram, and it's easy to see what's going on and make changes as needed.
`ffmpeg-python` takes care of running `ffmpeg` with the command-line arguments that correspond to the above filter diagram, in familiar Python terms.
<img src="https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/doc/screenshot.png" alt="Screenshot" align="middle" width="60%" />
Real-world signal graphs can get a heck of a lot more complex, but `ffmpeg-python` handles them with ease.
Real-world signal graphs can get a heck of a lot more complex, but `ffmpeg-python` handles arbitrarily large (directed-acyclic) signal graphs.
## Installation
The easiest way to acquire the latest version of `ffmpeg-python` is through pip:
### Installing `ffmpeg-python`
```
The latest version of `ffmpeg-python` can be acquired via a typical pip install:
```bash
pip install ffmpeg-python
```
It's also possible to clone the source and put it on your python path (`$PYTHONPATH`, `sys.path`, etc.):
```
> git clone git@github.com:kkroening/ffmpeg-python.git
> export PYTHONPATH=${PYTHONPATH}:ffmpeg-python
> python
>>> import ffmpeg
Or the source can be cloned and installed locally:
```bash
git clone git@github.com:kkroening/ffmpeg-python.git
pip install -e ./ffmpeg-python
```
## [API Reference](https://kkroening.github.io/ffmpeg-python/)
> **Note**: `ffmpeg-python` makes no attempt to download/install FFmpeg, as `ffmpeg-python` is merely a pure-Python wrapper - whereas FFmpeg installation is platform-dependent/environment-specific, and is thus the responsibility of the user, as described below.
API documentation is automatically generated from python docstrings and hosted on github pages: https://kkroening.github.io/ffmpeg-python/
### Installing FFmpeg
Before using `ffmpeg-python`, FFmpeg must be installed and accessible via the `$PATH` environment variable.
There are a variety of ways to install FFmpeg, such as the [official download links](https://ffmpeg.org/download.html), or using your package manager of choice (e.g. `sudo apt install ffmpeg` on Debian/Ubuntu, `brew install ffmpeg` on OS X, etc.).
Regardless of how FFmpeg is installed, you can check if your environment path is set correctly by running the `ffmpeg` command from the terminal, in which case the version information should appear, as in the following example (truncated for brevity):
Alternatively, standard python help is available, such as at the python REPL prompt as follows:
```
import ffmpeg
help(ffmpeg)
$ ffmpeg
ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)
```
> **Note**: The actual version information displayed here may vary from one system to another; but if a message such as `ffmpeg: command not found` appears instead of the version information, FFmpeg is not properly installed.
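The same `$PATH` check can be done programmatically. A minimal sketch using only the standard library (this is not part of ffmpeg-python; it merely looks up the executable the way the shell would):

```python
# Minimal sketch: verify that an `ffmpeg` executable is on $PATH before
# using ffmpeg-python, which shells out to it at run time.
import shutil

def ffmpeg_available():
    """Return the resolved path to ffmpeg, or None if it isn't on $PATH."""
    return shutil.which('ffmpeg')

if ffmpeg_available() is None:
    print('ffmpeg not found - install it and/or fix $PATH')
```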
## [Examples](https://github.com/kkroening/ffmpeg-python/tree/master/examples)
When in doubt, take a look at the [examples](https://github.com/kkroening/ffmpeg-python/tree/master/examples) to see if there's something that's close to whatever you're trying to do.
Here are a few:
- [Convert video to numpy array](https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md#convert-video-to-numpy-array)
- [Generate thumbnail for video](https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md#generate-thumbnail-for-video)
- [Read raw PCM audio via pipe](https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md#convert-sound-to-raw-pcm-audio)
- [JupyterLab/Notebook stream editor](https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md#jupyter-stream-editor)
<img src="https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/doc/jupyter-demo.gif" alt="jupyter demo" width="75%" />
- [Tensorflow/DeepDream streaming](https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md#tensorflow-streaming)
<img src="https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/examples/graphs/dream.png" alt="deep dream streaming" width="40%" />
See the [Examples README](https://github.com/kkroening/ffmpeg-python/tree/master/examples) for additional examples.
## Custom Filters
Don't see the filter you're looking for? `ffmpeg-python` is a work in progress, but it's easy to use any arbitrary ffmpeg filter:
```
Don't see the filter you're looking for? While `ffmpeg-python` includes shorthand notation for some of the most commonly used filters (such as `concat`), all filters can be referenced via the `.filter` operator:
```python
stream = ffmpeg.input('dummy.mp4')
stream = ffmpeg.filter_(stream, 'fps', fps=25, round='up')
stream = ffmpeg.filter(stream, 'fps', fps=25, round='up')
stream = ffmpeg.output(stream, 'dummy2.mp4')
ffmpeg.run(stream)
```
Or fluently:
```
(ffmpeg
```python
(
ffmpeg
.input('dummy.mp4')
.filter_('fps', fps=25, round='up')
.filter('fps', fps=25, round='up')
.output('dummy2.mp4')
.run()
)
```
When in doubt, refer to the [existing filters](https://github.com/kkroening/ffmpeg-python/blob/master/ffmpeg/_filters.py) and/or the [official ffmpeg documentation](https://ffmpeg.org/ffmpeg-filters.html).
**Special option names:**
Arguments with special names such as `-qscale:v` (variable bitrate), `-b:v` (constant bitrate), etc. can be specified as a keyword-args dictionary as follows:
```python
(
ffmpeg
.input('in.mp4')
.output('out.mp4', **{'qscale:v': 3})
.run()
)
```
**Multiple inputs:**
Filters that take multiple input streams can be used by passing the input streams as an array to `ffmpeg.filter`:
```python
main = ffmpeg.input('main.mp4')
logo = ffmpeg.input('logo.png')
(
ffmpeg
.filter([main, logo], 'overlay', 10, 10)
.output('out.mp4')
.run()
)
```
**Multiple outputs:**
Filters that produce multiple outputs can be used with `.filter_multi_output`:
```python
split = (
ffmpeg
.input('in.mp4')
.filter_multi_output('split') # or `.split()`
)
(
ffmpeg
.concat(split[0], split[1].reverse())
.output('out.mp4')
.run()
)
```
(In this particular case, `.split()` is the equivalent shorthand, but the general approach works for other multi-output filters.)
**String expressions:**
Expressions to be interpreted by ffmpeg can be included as string parameters and reference any special ffmpeg variable names:
```python
(
ffmpeg
.input('in.mp4')
.filter('crop', 'in_w-2*10', 'in_h-2*20')
    .output('out.mp4')
)
```
<br />
When in doubt, refer to the [existing filters](https://github.com/kkroening/ffmpeg-python/blob/master/ffmpeg/_filters.py), [examples](https://github.com/kkroening/ffmpeg-python/tree/master/examples), and/or the [official ffmpeg documentation](https://ffmpeg.org/ffmpeg-filters.html).
## Frequently asked questions
**Why do I get an import/attribute/etc. error from `import ffmpeg`?**
Make sure you ran `pip install ffmpeg-python` and _**not**_ `pip install ffmpeg` (wrong) or `pip install python-ffmpeg` (also wrong).
**Why did my audio stream get dropped?**
Some ffmpeg filters drop audio streams, and care must be taken to preserve the audio in the final output. The ``.audio`` and ``.video`` operators can be used to reference the audio/video portions of a stream so that they can be processed separately and then re-combined later in the pipeline.
This behavior is intrinsic to ffmpeg itself; ffmpeg-python tries to stay out of the way, and the official ffmpeg documentation explains why certain filters drop audio.
As usual, take a look at the [examples](https://github.com/kkroening/ffmpeg-python/tree/master/examples#audiovideo-pipeline) (*Audio/video pipeline* in particular).
**How can I find out the used command line arguments?**
You can run `stream.get_args()` before `stream.run()` to retrieve the command line arguments that will be passed to `ffmpeg`. You can also run `stream.compile()`, which additionally includes the `ffmpeg` executable as the first argument.
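For intuition, here's a hypothetical sketch of the relationship between the two (not ffmpeg-python's actual implementation, which derives the arguments from the filter graph): `get_args()` yields only the arguments, while `compile()` prepends the executable, so its result can be handed straight to a subprocess call:

```python
# Hypothetical sketch of the get_args()/compile() relationship.
def get_args(input_file, output_file, extra=()):
    """Arguments only, as get_args() conceptually returns them."""
    return ['-i', input_file] + list(extra) + [output_file]

def compile_cmd(input_file, output_file, cmd='ffmpeg', extra=()):
    """Full command line, executable first, as compile() conceptually returns."""
    return [cmd] + get_args(input_file, output_file, extra)

print(compile_cmd('in.mp4', 'out.mp4'))
```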
**How do I do XYZ?**
Take a look at each of the links in the [Additional Resources](https://kkroening.github.io/ffmpeg-python/) section at the end of this README. If you look everywhere and can't find what you're looking for and have a question that may be relevant to other users, you may open an issue asking how to do it, while providing a thorough explanation of what you're trying to do and what you've tried so far.
Issues that are not directly related to `ffmpeg-python`, or that ask others to write your code for you or to solve a complex signal-processing problem that isn't relevant to other users, will be closed.
That said, we hope to continue improving our documentation and provide a community of support for people using `ffmpeg-python` to do cool and exciting things.
## Contributing
Feel free to report any bugs or feature requests.
<img align="right" src="https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/doc/logo.png" alt="ffmpeg-python logo" width="20%" />
It should be fairly easy to use filters that aren't explicitly built into `ffmpeg-python` but if there's a feature or filter you'd really like to see included in the library, don't hesitate to open a feature request.
One of the best things you can do to help make `ffmpeg-python` better is to answer [open questions](https://github.com/kkroening/ffmpeg-python/labels/question) in the issue tracker. The questions that are answered will be tagged and incorporated into the documentation, examples, and other learning resources.
Pull requests are welcome as well.
If you notice things that could be better in the documentation or overall development experience, please say so in the [issue tracker](https://github.com/kkroening/ffmpeg-python/issues). And of course, feel free to report any bugs or submit feature requests.
Pull requests are welcome as well, but it wouldn't hurt to touch base in the issue tracker or hop on the [Matrix chat channel](https://riot.im/app/#/room/#ffmpeg-python:matrix.org) first.
Anyone who fixes any of the [open bugs](https://github.com/kkroening/ffmpeg-python/labels/bug) or implements [requested enhancements](https://github.com/kkroening/ffmpeg-python/labels/enhancement) is a hero, but changes should include passing tests.
### Running tests
```bash
git clone git@github.com:kkroening/ffmpeg-python.git
cd ffmpeg-python
virtualenv venv
. venv/bin/activate # (OS X / Linux)
venv\bin\activate # (Windows)
pip install -e .[dev]
pytest
```
<br />
### Special thanks
- [Fabrice Bellard](https://bellard.org/)
- [The FFmpeg team](https://ffmpeg.org/donations.html)
- [Arne de Laat](https://github.com/153957)
- [Davide Depau](https://github.com/depau)
- [Dim](https://github.com/lloti)
- [Noah Stier](https://github.com/noahstier)
## Additional Resources
- [API Reference](https://kkroening.github.io/ffmpeg-python/)
- [Examples](https://github.com/kkroening/ffmpeg-python/tree/master/examples)
- [Filters](https://github.com/kkroening/ffmpeg-python/blob/master/ffmpeg/_filters.py)
- [Tests](https://github.com/kkroening/ffmpeg-python/blob/master/ffmpeg/tests/test_ffmpeg.py)
- [FFmpeg Homepage](https://ffmpeg.org/)
- [FFmpeg Documentation](https://ffmpeg.org/ffmpeg.html)
- [FFmpeg Filters Documentation](https://ffmpeg.org/ffmpeg-filters.html)
- [Test cases](https://github.com/kkroening/ffmpeg-python/blob/master/ffmpeg/tests/test_ffmpeg.py)
- [Issue tracker](https://github.com/kkroening/ffmpeg-python/issues)
- Matrix Chat: [#ffmpeg-python:matrix.org](https://riot.im/app/#/room/#ffmpeg-python:matrix.org)

BIN
doc/formula.png Normal file


BIN
doc/formula.xcf Normal file



@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: d3019c15b90af9d4beabe6f0fbc238a9
config: f3635c9edf6e9bff1735d57d26069ada
tags: 645f666f9bcd5a90fca523b33c5a78b7



@ -4,7 +4,7 @@
*
* Sphinx stylesheet -- basic theme.
*
* :copyright: Copyright 2007-2017 by the Sphinx team, see AUTHORS.
* :copyright: Copyright 2007-2019 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
@ -81,10 +81,26 @@ div.sphinxsidebar input {
font-size: 1em;
}
div.sphinxsidebar #searchbox input[type="text"] {
width: 170px;
div.sphinxsidebar #searchbox form.search {
overflow: hidden;
}
div.sphinxsidebar #searchbox input[type="text"] {
float: left;
width: 80%;
padding: 0.25em;
box-sizing: border-box;
}
div.sphinxsidebar #searchbox input[type="submit"] {
float: left;
width: 20%;
border-left: none;
padding: 0.25em;
box-sizing: border-box;
}
img {
border: 0;
max-width: 100%;
@ -199,6 +215,11 @@ table.modindextable td {
/* -- general body styles --------------------------------------------------- */
div.body {
min-width: 450px;
max-width: 800px;
}
div.body p, div.body dd, div.body li, div.body blockquote {
-moz-hyphens: auto;
-ms-hyphens: auto;
@ -210,6 +231,16 @@ a.headerlink {
visibility: hidden;
}
a.brackets:before,
span.brackets > a:before{
content: "[";
}
a.brackets:after,
span.brackets > a:after {
content: "]";
}
h1:hover > a.headerlink,
h2:hover > a.headerlink,
h3:hover > a.headerlink,
@ -258,6 +289,12 @@ img.align-center, .figure.align-center, object.align-center {
margin-right: auto;
}
img.align-default, .figure.align-default {
display: block;
margin-left: auto;
margin-right: auto;
}
.align-left {
text-align: left;
}
@ -266,6 +303,10 @@ img.align-center, .figure.align-center, object.align-center {
text-align: center;
}
.align-default {
text-align: center;
}
.align-right {
text-align: right;
}
@ -332,6 +373,16 @@ table.docutils {
border-collapse: collapse;
}
table.align-center {
margin-left: auto;
margin-right: auto;
}
table.align-default {
margin-left: auto;
margin-right: auto;
}
table caption span.caption-number {
font-style: italic;
}
@ -365,6 +416,16 @@ table.citation td {
border-bottom: none;
}
th > p:first-child,
td > p:first-child {
margin-top: 0px;
}
th > p:last-child,
td > p:last-child {
margin-bottom: 0px;
}
/* -- figures --------------------------------------------------------------- */
div.figure {
@ -405,6 +466,13 @@ table.field-list td, table.field-list th {
hyphens: manual;
}
/* -- hlist styles ---------------------------------------------------------- */
table.hlist td {
vertical-align: top;
}
/* -- other body styles ----------------------------------------------------- */
ol.arabic {
@ -427,11 +495,57 @@ ol.upperroman {
list-style: upper-roman;
}
li > p:first-child {
margin-top: 0px;
}
li > p:last-child {
margin-bottom: 0px;
}
dl.footnote > dt,
dl.citation > dt {
float: left;
}
dl.footnote > dd,
dl.citation > dd {
margin-bottom: 0em;
}
dl.footnote > dd:after,
dl.citation > dd:after {
content: "";
clear: both;
}
dl.field-list {
display: flex;
flex-wrap: wrap;
}
dl.field-list > dt {
flex-basis: 20%;
font-weight: bold;
word-break: break-word;
}
dl.field-list > dt:after {
content: ":";
}
dl.field-list > dd {
flex-basis: 70%;
padding-left: 1em;
margin-left: 0em;
margin-bottom: 0em;
}
dl {
margin-bottom: 15px;
}
dd p {
dd > p:first-child {
margin-top: 0px;
}
@@ -445,10 +559,14 @@ dd {
margin-left: 30px;
}
dt:target, .highlighted {
dt:target, span.highlighted {
background-color: #fbe54e;
}
rect.highlighted {
fill: #fbe54e;
}
dl.glossary dt {
font-weight: bold;
font-size: 1.1em;
@@ -500,6 +618,12 @@ dl.glossary dt {
font-style: oblique;
}
.classifier:before {
font-style: normal;
margin: 0.5em;
content: ":";
}
abbr, acronym {
border-bottom: dotted 1px;
cursor: help;


@@ -4,7 +4,7 @@
*
* Sphinx JavaScript utilities for all documentation.
*
* :copyright: Copyright 2007-2017 by the Sphinx team, see AUTHORS.
* :copyright: Copyright 2007-2019 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
@@ -45,7 +45,7 @@ jQuery.urlencode = encodeURIComponent;
* it will always return arrays of strings for the value parts.
*/
jQuery.getQueryParameters = function(s) {
if (typeof s == 'undefined')
if (typeof s === 'undefined')
s = document.location.search;
var parts = s.substr(s.indexOf('?') + 1).split('&');
var result = {};
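The behavior the comment above describes — always returning arrays of strings for the value parts — can be sketched as a standalone function. This is a hedged stand-in for `jQuery.getQueryParameters` (the plain-JS form is an assumption; the parsing logic mirrors the hunk above):

```javascript
// Standalone sketch of query-string parsing (assumption: no jQuery).
// Every key maps to an array of strings, even when it occurs only once.
function getQueryParameters(s) {
  var parts = s.substr(s.indexOf('?') + 1).split('&');
  var result = {};
  for (var i = 0; i < parts.length; i++) {
    var tmp = parts[i].split('=', 2);
    var key = decodeURIComponent(tmp[0].replace(/\+/g, ' '));
    var value = decodeURIComponent((tmp[1] || '').replace(/\+/g, ' '));
    (result[key] || (result[key] = [])).push(value);
  }
  return result;
}
// getQueryParameters('?q=ffmpeg&q=python') → { q: ['ffmpeg', 'python'] }
```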
@@ -66,29 +66,54 @@ jQuery.getQueryParameters = function(s) {
* span elements with the given class name.
*/
jQuery.fn.highlightText = function(text, className) {
function highlight(node) {
if (node.nodeType == 3) {
function highlight(node, addItems) {
if (node.nodeType === 3) {
var val = node.nodeValue;
var pos = val.toLowerCase().indexOf(text);
if (pos >= 0 && !jQuery(node.parentNode).hasClass(className)) {
var span = document.createElement("span");
span.className = className;
if (pos >= 0 &&
!jQuery(node.parentNode).hasClass(className) &&
!jQuery(node.parentNode).hasClass("nohighlight")) {
var span;
var isInSVG = jQuery(node).closest("body, svg, foreignObject").is("svg");
if (isInSVG) {
span = document.createElementNS("http://www.w3.org/2000/svg", "tspan");
} else {
span = document.createElement("span");
span.className = className;
}
span.appendChild(document.createTextNode(val.substr(pos, text.length)));
node.parentNode.insertBefore(span, node.parentNode.insertBefore(
document.createTextNode(val.substr(pos + text.length)),
node.nextSibling));
node.nodeValue = val.substr(0, pos);
if (isInSVG) {
var rect = document.createElementNS("http://www.w3.org/2000/svg", "rect");
var bbox = node.parentElement.getBBox();
rect.x.baseVal.value = bbox.x;
rect.y.baseVal.value = bbox.y;
rect.width.baseVal.value = bbox.width;
rect.height.baseVal.value = bbox.height;
rect.setAttribute('class', className);
addItems.push({
"parent": node.parentNode,
"target": rect});
}
}
}
else if (!jQuery(node).is("button, select, textarea")) {
jQuery.each(node.childNodes, function() {
highlight(this);
highlight(this, addItems);
});
}
}
return this.each(function() {
highlight(this);
var addItems = [];
var result = this.each(function() {
highlight(this, addItems);
});
for (var i = 0; i < addItems.length; ++i) {
jQuery(addItems[i].parent).before(addItems[i].target);
}
return result;
};
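The core of `highlightText` is splitting a text node into before/match/after segments around the query; the diff above then wraps the match in a `span` (or an SVG `tspan` plus a backing `rect`). The splitting step can be isolated as a pure function — `splitForHighlight` is a hypothetical name, and the DOM insertion is deliberately omitted:

```javascript
// Pure-function sketch of the node-splitting step in highlightText
// (assumption: case-insensitive match, as in the original's
// val.toLowerCase().indexOf(text)).
function splitForHighlight(val, text) {
  var pos = val.toLowerCase().indexOf(text.toLowerCase());
  if (pos < 0) return null; // no match: the node is left untouched
  return {
    before: val.substr(0, pos),           // stays in the original text node
    match: val.substr(pos, text.length),  // wrapped in <span>/<tspan>
    after: val.substr(pos + text.length)  // becomes a new trailing text node
  };
}
```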
/*
@@ -124,28 +149,30 @@ var Documentation = {
this.fixFirefoxAnchorBug();
this.highlightSearchWords();
this.initIndexTable();
if (DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) {
this.initOnKeyListeners();
}
},
/**
* i18n support
*/
TRANSLATIONS : {},
PLURAL_EXPR : function(n) { return n == 1 ? 0 : 1; },
PLURAL_EXPR : function(n) { return n === 1 ? 0 : 1; },
LOCALE : 'unknown',
// gettext and ngettext don't access this so that the functions
// can safely be bound to a different name (_ = Documentation.gettext)
gettext : function(string) {
var translated = Documentation.TRANSLATIONS[string];
if (typeof translated == 'undefined')
if (typeof translated === 'undefined')
return string;
return (typeof translated == 'string') ? translated : translated[0];
return (typeof translated === 'string') ? translated : translated[0];
},
ngettext : function(singular, plural, n) {
var translated = Documentation.TRANSLATIONS[singular];
if (typeof translated == 'undefined')
if (typeof translated === 'undefined')
return (n == 1) ? singular : plural;
return translated[Documentation.PLURALEXPR(n)];
},
@@ -180,7 +207,7 @@ var Documentation = {
* see: https://bugzilla.mozilla.org/show_bug.cgi?id=645075
*/
fixFirefoxAnchorBug : function() {
if (document.location.hash)
if (document.location.hash && $.browser.mozilla)
window.setTimeout(function() {
document.location.href += '';
}, 10);
@@ -216,7 +243,7 @@ var Documentation = {
var src = $(this).attr('src');
var idnum = $(this).attr('id').substr(7);
$('tr.cg-' + idnum).toggle();
if (src.substr(-9) == 'minus.png')
if (src.substr(-9) === 'minus.png')
$(this).attr('src', src.substr(0, src.length-9) + 'plus.png');
else
$(this).attr('src', src.substr(0, src.length-8) + 'minus.png');
@@ -248,7 +275,7 @@ var Documentation = {
var path = document.location.pathname;
var parts = path.split(/\//);
$.each(DOCUMENTATION_OPTIONS.URL_ROOT.split(/\//), function() {
if (this == '..')
if (this === '..')
parts.pop();
});
var url = parts.join('/');
@@ -284,4 +311,4 @@ _ = Documentation.gettext;
$(document).ready(function() {
Documentation.init();
});
});


@@ -0,0 +1,10 @@
var DOCUMENTATION_OPTIONS = {
URL_ROOT: document.getElementById("documentation_options").getAttribute('data-url_root'),
VERSION: '',
LANGUAGE: 'None',
COLLAPSE_INDEX: false,
FILE_SUFFIX: '.html',
HAS_SOURCE: true,
SOURCELINK_SUFFIX: '.txt',
NAVIGATION_WITH_KEYS: false
};


File diff suppressed because it is too large

File diff suppressed because one or more lines are too long


@@ -0,0 +1,297 @@
/*
* language_data.js
* ~~~~~~~~~~~~~~~~
*
* This script contains the language-specific data used by searchtools.js,
* namely the list of stopwords, stemmer, scorer and splitter.
*
* :copyright: Copyright 2007-2019 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
var stopwords = ["a","and","are","as","at","be","but","by","for","if","in","into","is","it","near","no","not","of","on","or","such","that","the","their","then","there","these","they","this","to","was","will","with"];
/* Non-minified version JS is _stemmer.js if file is provided */
/**
* Porter Stemmer
*/
var Stemmer = function() {
var step2list = {
ational: 'ate',
tional: 'tion',
enci: 'ence',
anci: 'ance',
izer: 'ize',
bli: 'ble',
alli: 'al',
entli: 'ent',
eli: 'e',
ousli: 'ous',
ization: 'ize',
ation: 'ate',
ator: 'ate',
alism: 'al',
iveness: 'ive',
fulness: 'ful',
ousness: 'ous',
aliti: 'al',
iviti: 'ive',
biliti: 'ble',
logi: 'log'
};
var step3list = {
icate: 'ic',
ative: '',
alize: 'al',
iciti: 'ic',
ical: 'ic',
ful: '',
ness: ''
};
var c = "[^aeiou]"; // consonant
var v = "[aeiouy]"; // vowel
var C = c + "[^aeiouy]*"; // consonant sequence
var V = v + "[aeiou]*"; // vowel sequence
var mgr0 = "^(" + C + ")?" + V + C; // [C]VC... is m>0
var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1
var mgr1 = "^(" + C + ")?" + V + C + V + C; // [C]VCVC... is m>1
var s_v = "^(" + C + ")?" + v; // vowel in stem
this.stemWord = function (w) {
var stem;
var suffix;
var firstch;
var origword = w;
if (w.length < 3)
return w;
var re;
var re2;
var re3;
var re4;
firstch = w.substr(0,1);
if (firstch == "y")
w = firstch.toUpperCase() + w.substr(1);
// Step 1a
re = /^(.+?)(ss|i)es$/;
re2 = /^(.+?)([^s])s$/;
if (re.test(w))
w = w.replace(re,"$1$2");
else if (re2.test(w))
w = w.replace(re2,"$1$2");
// Step 1b
re = /^(.+?)eed$/;
re2 = /^(.+?)(ed|ing)$/;
if (re.test(w)) {
var fp = re.exec(w);
re = new RegExp(mgr0);
if (re.test(fp[1])) {
re = /.$/;
w = w.replace(re,"");
}
}
else if (re2.test(w)) {
var fp = re2.exec(w);
stem = fp[1];
re2 = new RegExp(s_v);
if (re2.test(stem)) {
w = stem;
re2 = /(at|bl|iz)$/;
re3 = new RegExp("([^aeiouylsz])\\1$");
re4 = new RegExp("^" + C + v + "[^aeiouwxy]$");
if (re2.test(w))
w = w + "e";
else if (re3.test(w)) {
re = /.$/;
w = w.replace(re,"");
}
else if (re4.test(w))
w = w + "e";
}
}
// Step 1c
re = /^(.+?)y$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(s_v);
if (re.test(stem))
w = stem + "i";
}
// Step 2
re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
suffix = fp[2];
re = new RegExp(mgr0);
if (re.test(stem))
w = stem + step2list[suffix];
}
// Step 3
re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
suffix = fp[2];
re = new RegExp(mgr0);
if (re.test(stem))
w = stem + step3list[suffix];
}
// Step 4
re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/;
re2 = /^(.+?)(s|t)(ion)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(mgr1);
if (re.test(stem))
w = stem;
}
else if (re2.test(w)) {
var fp = re2.exec(w);
stem = fp[1] + fp[2];
re2 = new RegExp(mgr1);
if (re2.test(stem))
w = stem;
}
// Step 5
re = /^(.+?)e$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(mgr1);
re2 = new RegExp(meq1);
re3 = new RegExp("^" + C + v + "[^aeiouwxy]$");
if (re.test(stem) || (re2.test(stem) && !(re3.test(stem))))
w = stem;
}
re = /ll$/;
re2 = new RegExp(mgr1);
if (re.test(w) && re2.test(w)) {
re = /.$/;
w = w.replace(re,"");
}
// and turn initial Y back to y
if (firstch == "y")
w = firstch.toLowerCase() + w.substr(1);
return w;
}
}
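A single stage of the Porter stemmer above is easy to isolate. Here is Step 1a (plural stripping) as a standalone sketch — the helper name is hypothetical, but the two regexes are copied verbatim from the code above:

```javascript
// Step 1a of the Porter stemmer in isolation: strip plural endings.
// 'sses'/'ies' lose their final 'es'; a trailing 's' after a non-'s'
// character is dropped; double-'s' words are left alone.
function step1a(w) {
  var re = /^(.+?)(ss|i)es$/;
  var re2 = /^(.+?)([^s])s$/;
  if (re.test(w)) return w.replace(re, '$1$2');
  if (re2.test(w)) return w.replace(re2, '$1$2');
  return w;
}
// step1a('ponies') → 'poni'; step1a('cats') → 'cat'; step1a('caress') → 'caress'
```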
var splitChars = (function() {
var result = {};
var singles = [96, 180, 187, 191, 215, 247, 749, 885, 903, 907, 909, 930, 1014, 1648,
1748, 1809, 2416, 2473, 2481, 2526, 2601, 2609, 2612, 2615, 2653, 2702,
2706, 2729, 2737, 2740, 2857, 2865, 2868, 2910, 2928, 2948, 2961, 2971,
2973, 3085, 3089, 3113, 3124, 3213, 3217, 3241, 3252, 3295, 3341, 3345,
3369, 3506, 3516, 3633, 3715, 3721, 3736, 3744, 3748, 3750, 3756, 3761,
3781, 3912, 4239, 4347, 4681, 4695, 4697, 4745, 4785, 4799, 4801, 4823,
4881, 5760, 5901, 5997, 6313, 7405, 8024, 8026, 8028, 8030, 8117, 8125,
8133, 8181, 8468, 8485, 8487, 8489, 8494, 8527, 11311, 11359, 11687, 11695,
11703, 11711, 11719, 11727, 11735, 12448, 12539, 43010, 43014, 43019, 43587,
43696, 43713, 64286, 64297, 64311, 64317, 64319, 64322, 64325, 65141];
var i, j, start, end;
for (i = 0; i < singles.length; i++) {
result[singles[i]] = true;
}
var ranges = [[0, 47], [58, 64], [91, 94], [123, 169], [171, 177], [182, 184], [706, 709],
[722, 735], [741, 747], [751, 879], [888, 889], [894, 901], [1154, 1161],
[1318, 1328], [1367, 1368], [1370, 1376], [1416, 1487], [1515, 1519], [1523, 1568],
[1611, 1631], [1642, 1645], [1750, 1764], [1767, 1773], [1789, 1790], [1792, 1807],
[1840, 1868], [1958, 1968], [1970, 1983], [2027, 2035], [2038, 2041], [2043, 2047],
[2070, 2073], [2075, 2083], [2085, 2087], [2089, 2307], [2362, 2364], [2366, 2383],
[2385, 2391], [2402, 2405], [2419, 2424], [2432, 2436], [2445, 2446], [2449, 2450],
[2483, 2485], [2490, 2492], [2494, 2509], [2511, 2523], [2530, 2533], [2546, 2547],
[2554, 2564], [2571, 2574], [2577, 2578], [2618, 2648], [2655, 2661], [2672, 2673],
[2677, 2692], [2746, 2748], [2750, 2767], [2769, 2783], [2786, 2789], [2800, 2820],
[2829, 2830], [2833, 2834], [2874, 2876], [2878, 2907], [2914, 2917], [2930, 2946],
[2955, 2957], [2966, 2968], [2976, 2978], [2981, 2983], [2987, 2989], [3002, 3023],
[3025, 3045], [3059, 3076], [3130, 3132], [3134, 3159], [3162, 3167], [3170, 3173],
[3184, 3191], [3199, 3204], [3258, 3260], [3262, 3293], [3298, 3301], [3312, 3332],
[3386, 3388], [3390, 3423], [3426, 3429], [3446, 3449], [3456, 3460], [3479, 3481],
[3518, 3519], [3527, 3584], [3636, 3647], [3655, 3663], [3674, 3712], [3717, 3718],
[3723, 3724], [3726, 3731], [3752, 3753], [3764, 3772], [3774, 3775], [3783, 3791],
[3802, 3803], [3806, 3839], [3841, 3871], [3892, 3903], [3949, 3975], [3980, 4095],
[4139, 4158], [4170, 4175], [4182, 4185], [4190, 4192], [4194, 4196], [4199, 4205],
[4209, 4212], [4226, 4237], [4250, 4255], [4294, 4303], [4349, 4351], [4686, 4687],
[4702, 4703], [4750, 4751], [4790, 4791], [4806, 4807], [4886, 4887], [4955, 4968],
[4989, 4991], [5008, 5023], [5109, 5120], [5741, 5742], [5787, 5791], [5867, 5869],
[5873, 5887], [5906, 5919], [5938, 5951], [5970, 5983], [6001, 6015], [6068, 6102],
[6104, 6107], [6109, 6111], [6122, 6127], [6138, 6159], [6170, 6175], [6264, 6271],
[6315, 6319], [6390, 6399], [6429, 6469], [6510, 6511], [6517, 6527], [6572, 6592],
[6600, 6607], [6619, 6655], [6679, 6687], [6741, 6783], [6794, 6799], [6810, 6822],
[6824, 6916], [6964, 6980], [6988, 6991], [7002, 7042], [7073, 7085], [7098, 7167],
[7204, 7231], [7242, 7244], [7294, 7400], [7410, 7423], [7616, 7679], [7958, 7959],
[7966, 7967], [8006, 8007], [8014, 8015], [8062, 8063], [8127, 8129], [8141, 8143],
[8148, 8149], [8156, 8159], [8173, 8177], [8189, 8303], [8306, 8307], [8314, 8318],
[8330, 8335], [8341, 8449], [8451, 8454], [8456, 8457], [8470, 8472], [8478, 8483],
[8506, 8507], [8512, 8516], [8522, 8525], [8586, 9311], [9372, 9449], [9472, 10101],
[10132, 11263], [11493, 11498], [11503, 11516], [11518, 11519], [11558, 11567],
[11622, 11630], [11632, 11647], [11671, 11679], [11743, 11822], [11824, 12292],
[12296, 12320], [12330, 12336], [12342, 12343], [12349, 12352], [12439, 12444],
[12544, 12548], [12590, 12592], [12687, 12689], [12694, 12703], [12728, 12783],
[12800, 12831], [12842, 12880], [12896, 12927], [12938, 12976], [12992, 13311],
[19894, 19967], [40908, 40959], [42125, 42191], [42238, 42239], [42509, 42511],
[42540, 42559], [42592, 42593], [42607, 42622], [42648, 42655], [42736, 42774],
[42784, 42785], [42889, 42890], [42893, 43002], [43043, 43055], [43062, 43071],
[43124, 43137], [43188, 43215], [43226, 43249], [43256, 43258], [43260, 43263],
[43302, 43311], [43335, 43359], [43389, 43395], [43443, 43470], [43482, 43519],
[43561, 43583], [43596, 43599], [43610, 43615], [43639, 43641], [43643, 43647],
[43698, 43700], [43703, 43704], [43710, 43711], [43715, 43738], [43742, 43967],
[44003, 44015], [44026, 44031], [55204, 55215], [55239, 55242], [55292, 55295],
[57344, 63743], [64046, 64047], [64110, 64111], [64218, 64255], [64263, 64274],
[64280, 64284], [64434, 64466], [64830, 64847], [64912, 64913], [64968, 65007],
[65020, 65135], [65277, 65295], [65306, 65312], [65339, 65344], [65371, 65381],
[65471, 65473], [65480, 65481], [65488, 65489], [65496, 65497]];
for (i = 0; i < ranges.length; i++) {
start = ranges[i][0];
end = ranges[i][1];
for (j = start; j <= end; j++) {
result[j] = true;
}
}
return result;
})();
function splitQuery(query) {
var result = [];
var start = -1;
for (var i = 0; i < query.length; i++) {
if (splitChars[query.charCodeAt(i)]) {
if (start !== -1) {
result.push(query.slice(start, i));
start = -1;
}
} else if (start === -1) {
start = i;
}
}
if (start !== -1) {
result.push(query.slice(start));
}
return result;
}
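`splitQuery` walks the query once, treating any character whose code appears in `splitChars` as a delimiter. The same loop works with a miniature stand-in table — the two-entry `splitChars` below is an assumption for illustration; the real table above covers punctuation and many Unicode ranges:

```javascript
// Miniature stand-in for splitChars: only space (32) and hyphen (45)
// act as delimiters in this sketch.
var splitChars = { 32: true, 45: true };
function splitQuery(query) {
  var result = [];
  var start = -1; // start index of the current token, or -1 if none
  for (var i = 0; i < query.length; i++) {
    if (splitChars[query.charCodeAt(i)]) {
      if (start !== -1) {
        result.push(query.slice(start, i)); // close the current token
        start = -1;
      }
    } else if (start === -1) {
      start = i; // open a new token
    }
  }
  if (start !== -1) result.push(query.slice(start)); // trailing token
  return result;
}
// splitQuery('ffmpeg-python filter_graph') → ['ffmpeg', 'python', 'filter_graph']
```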


@@ -4,7 +4,7 @@
*
* Sphinx stylesheet -- nature theme.
*
* :copyright: Copyright 2007-2017 by the Sphinx team, see AUTHORS.
* :copyright: Copyright 2007-2019 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
@@ -16,7 +16,7 @@
body {
font-family: Arial, sans-serif;
font-size: 100%;
background-color: #111;
background-color: #fff;
color: #555;
margin: 0;
padding: 0;
@@ -125,14 +125,11 @@ div.sphinxsidebar input {
font-size: 1em;
}
div.sphinxsidebar input[type=text]{
div.sphinxsidebar .searchformwrapper {
margin-left: 20px;
margin-right: 20px;
}
div.sphinxsidebar input[type=submit]{
margin-left: 20px;
}
/* -- body styles ----------------------------------------------------------- */
a {


@@ -1,331 +1,54 @@
/*
* searchtools.js_t
* searchtools.js
* ~~~~~~~~~~~~~~~~
*
* Sphinx JavaScript utilities for the full-text search.
*
* :copyright: Copyright 2007-2017 by the Sphinx team, see AUTHORS.
* :copyright: Copyright 2007-2019 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
if (!Scorer) {
/**
* Simple result scoring code.
*/
var Scorer = {
// Implement the following function to further tweak the score for each result
// The function takes a result array [filename, title, anchor, descr, score]
// and returns the new score.
/*
score: function(result) {
return result[4];
},
*/
/* Non-minified version JS is _stemmer.js if file is provided */
/**
* Porter Stemmer
*/
var Stemmer = function() {
// query matches the full name of an object
objNameMatch: 11,
// or matches in the last dotted part of the object name
objPartialMatch: 6,
// Additive scores depending on the priority of the object
objPrio: {0: 15, // used to be importantResults
1: 5, // used to be objectResults
2: -5}, // used to be unimportantResults
// Used when the priority is not in the mapping.
objPrioDefault: 0,
var step2list = {
ational: 'ate',
tional: 'tion',
enci: 'ence',
anci: 'ance',
izer: 'ize',
bli: 'ble',
alli: 'al',
entli: 'ent',
eli: 'e',
ousli: 'ous',
ization: 'ize',
ation: 'ate',
ator: 'ate',
alism: 'al',
iveness: 'ive',
fulness: 'ful',
ousness: 'ous',
aliti: 'al',
iviti: 'ive',
biliti: 'ble',
logi: 'log'
// query found in title
title: 15,
partialTitle: 7,
// query found in terms
term: 5,
partialTerm: 2
};
}
var step3list = {
icate: 'ic',
ative: '',
alize: 'al',
iciti: 'ic',
ical: 'ic',
ful: '',
ness: ''
};
var c = "[^aeiou]"; // consonant
var v = "[aeiouy]"; // vowel
var C = c + "[^aeiouy]*"; // consonant sequence
var V = v + "[aeiou]*"; // vowel sequence
var mgr0 = "^(" + C + ")?" + V + C; // [C]VC... is m>0
var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1
var mgr1 = "^(" + C + ")?" + V + C + V + C; // [C]VCVC... is m>1
var s_v = "^(" + C + ")?" + v; // vowel in stem
this.stemWord = function (w) {
var stem;
var suffix;
var firstch;
var origword = w;
if (w.length < 3)
return w;
var re;
var re2;
var re3;
var re4;
firstch = w.substr(0,1);
if (firstch == "y")
w = firstch.toUpperCase() + w.substr(1);
// Step 1a
re = /^(.+?)(ss|i)es$/;
re2 = /^(.+?)([^s])s$/;
if (re.test(w))
w = w.replace(re,"$1$2");
else if (re2.test(w))
w = w.replace(re2,"$1$2");
// Step 1b
re = /^(.+?)eed$/;
re2 = /^(.+?)(ed|ing)$/;
if (re.test(w)) {
var fp = re.exec(w);
re = new RegExp(mgr0);
if (re.test(fp[1])) {
re = /.$/;
w = w.replace(re,"");
}
}
else if (re2.test(w)) {
var fp = re2.exec(w);
stem = fp[1];
re2 = new RegExp(s_v);
if (re2.test(stem)) {
w = stem;
re2 = /(at|bl|iz)$/;
re3 = new RegExp("([^aeiouylsz])\\1$");
re4 = new RegExp("^" + C + v + "[^aeiouwxy]$");
if (re2.test(w))
w = w + "e";
else if (re3.test(w)) {
re = /.$/;
w = w.replace(re,"");
}
else if (re4.test(w))
w = w + "e";
}
}
// Step 1c
re = /^(.+?)y$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(s_v);
if (re.test(stem))
w = stem + "i";
}
// Step 2
re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
suffix = fp[2];
re = new RegExp(mgr0);
if (re.test(stem))
w = stem + step2list[suffix];
}
// Step 3
re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
suffix = fp[2];
re = new RegExp(mgr0);
if (re.test(stem))
w = stem + step3list[suffix];
}
// Step 4
re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/;
re2 = /^(.+?)(s|t)(ion)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(mgr1);
if (re.test(stem))
w = stem;
}
else if (re2.test(w)) {
var fp = re2.exec(w);
stem = fp[1] + fp[2];
re2 = new RegExp(mgr1);
if (re2.test(stem))
w = stem;
}
// Step 5
re = /^(.+?)e$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(mgr1);
re2 = new RegExp(meq1);
re3 = new RegExp("^" + C + v + "[^aeiouwxy]$");
if (re.test(stem) || (re2.test(stem) && !(re3.test(stem))))
w = stem;
}
re = /ll$/;
re2 = new RegExp(mgr1);
if (re.test(w) && re2.test(w)) {
re = /.$/;
w = w.replace(re,"");
}
// and turn initial Y back to y
if (firstch == "y")
w = firstch.toLowerCase() + w.substr(1);
return w;
if (!splitQuery) {
function splitQuery(query) {
return query.split(/\s+/);
}
}
/**
* Simple result scoring code.
*/
var Scorer = {
// Implement the following function to further tweak the score for each result
// The function takes a result array [filename, title, anchor, descr, score]
// and returns the new score.
/*
score: function(result) {
return result[4];
},
*/
// query matches the full name of an object
objNameMatch: 11,
// or matches in the last dotted part of the object name
objPartialMatch: 6,
// Additive scores depending on the priority of the object
objPrio: {0: 15, // used to be importantResults
1: 5, // used to be objectResults
2: -5}, // used to be unimportantResults
// Used when the priority is not in the mapping.
objPrioDefault: 0,
// query found in title
title: 15,
// query found in terms
term: 5
};
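The additive scheme in `Scorer` can be illustrated with a small helper. `objectScore` is a hypothetical name; it mirrors the exact-name and last-dotted-part checks that the object search below applies before adding the priority bonus:

```javascript
// Sketch of how the Scorer weights combine for one object result
// (assumption: simplified; weights copied from the Scorer object above).
var Scorer = {
  objNameMatch: 11,
  objPartialMatch: 6,
  objPrio: { 0: 15, 1: 5, 2: -5 },
  objPrioDefault: 0
};
function objectScore(fullname, query, priority) {
  var lower = fullname.toLowerCase();
  var parts = lower.split('.');
  var q = query.toLowerCase();
  var score = 0;
  if (lower === q || parts[parts.length - 1] === q)
    score += Scorer.objNameMatch;        // exact match of full or last name
  else if (parts[parts.length - 1].indexOf(q) > -1)
    score += Scorer.objPartialMatch;     // substring match in the last name
  score += (priority in Scorer.objPrio)
    ? Scorer.objPrio[priority]
    : Scorer.objPrioDefault;             // priority bonus/penalty
  return score;
}
// objectScore('ffmpeg.output', 'output', 1) → 16
```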
var splitChars = (function() {
var result = {};
var singles = [96, 180, 187, 191, 215, 247, 749, 885, 903, 907, 909, 930, 1014, 1648,
1748, 1809, 2416, 2473, 2481, 2526, 2601, 2609, 2612, 2615, 2653, 2702,
2706, 2729, 2737, 2740, 2857, 2865, 2868, 2910, 2928, 2948, 2961, 2971,
2973, 3085, 3089, 3113, 3124, 3213, 3217, 3241, 3252, 3295, 3341, 3345,
3369, 3506, 3516, 3633, 3715, 3721, 3736, 3744, 3748, 3750, 3756, 3761,
3781, 3912, 4239, 4347, 4681, 4695, 4697, 4745, 4785, 4799, 4801, 4823,
4881, 5760, 5901, 5997, 6313, 7405, 8024, 8026, 8028, 8030, 8117, 8125,
8133, 8181, 8468, 8485, 8487, 8489, 8494, 8527, 11311, 11359, 11687, 11695,
11703, 11711, 11719, 11727, 11735, 12448, 12539, 43010, 43014, 43019, 43587,
43696, 43713, 64286, 64297, 64311, 64317, 64319, 64322, 64325, 65141];
var i, j, start, end;
for (i = 0; i < singles.length; i++) {
result[singles[i]] = true;
}
var ranges = [[0, 47], [58, 64], [91, 94], [123, 169], [171, 177], [182, 184], [706, 709],
[722, 735], [741, 747], [751, 879], [888, 889], [894, 901], [1154, 1161],
[1318, 1328], [1367, 1368], [1370, 1376], [1416, 1487], [1515, 1519], [1523, 1568],
[1611, 1631], [1642, 1645], [1750, 1764], [1767, 1773], [1789, 1790], [1792, 1807],
[1840, 1868], [1958, 1968], [1970, 1983], [2027, 2035], [2038, 2041], [2043, 2047],
[2070, 2073], [2075, 2083], [2085, 2087], [2089, 2307], [2362, 2364], [2366, 2383],
[2385, 2391], [2402, 2405], [2419, 2424], [2432, 2436], [2445, 2446], [2449, 2450],
[2483, 2485], [2490, 2492], [2494, 2509], [2511, 2523], [2530, 2533], [2546, 2547],
[2554, 2564], [2571, 2574], [2577, 2578], [2618, 2648], [2655, 2661], [2672, 2673],
[2677, 2692], [2746, 2748], [2750, 2767], [2769, 2783], [2786, 2789], [2800, 2820],
[2829, 2830], [2833, 2834], [2874, 2876], [2878, 2907], [2914, 2917], [2930, 2946],
[2955, 2957], [2966, 2968], [2976, 2978], [2981, 2983], [2987, 2989], [3002, 3023],
[3025, 3045], [3059, 3076], [3130, 3132], [3134, 3159], [3162, 3167], [3170, 3173],
[3184, 3191], [3199, 3204], [3258, 3260], [3262, 3293], [3298, 3301], [3312, 3332],
[3386, 3388], [3390, 3423], [3426, 3429], [3446, 3449], [3456, 3460], [3479, 3481],
[3518, 3519], [3527, 3584], [3636, 3647], [3655, 3663], [3674, 3712], [3717, 3718],
[3723, 3724], [3726, 3731], [3752, 3753], [3764, 3772], [3774, 3775], [3783, 3791],
[3802, 3803], [3806, 3839], [3841, 3871], [3892, 3903], [3949, 3975], [3980, 4095],
[4139, 4158], [4170, 4175], [4182, 4185], [4190, 4192], [4194, 4196], [4199, 4205],
[4209, 4212], [4226, 4237], [4250, 4255], [4294, 4303], [4349, 4351], [4686, 4687],
[4702, 4703], [4750, 4751], [4790, 4791], [4806, 4807], [4886, 4887], [4955, 4968],
[4989, 4991], [5008, 5023], [5109, 5120], [5741, 5742], [5787, 5791], [5867, 5869],
[5873, 5887], [5906, 5919], [5938, 5951], [5970, 5983], [6001, 6015], [6068, 6102],
[6104, 6107], [6109, 6111], [6122, 6127], [6138, 6159], [6170, 6175], [6264, 6271],
[6315, 6319], [6390, 6399], [6429, 6469], [6510, 6511], [6517, 6527], [6572, 6592],
[6600, 6607], [6619, 6655], [6679, 6687], [6741, 6783], [6794, 6799], [6810, 6822],
[6824, 6916], [6964, 6980], [6988, 6991], [7002, 7042], [7073, 7085], [7098, 7167],
[7204, 7231], [7242, 7244], [7294, 7400], [7410, 7423], [7616, 7679], [7958, 7959],
[7966, 7967], [8006, 8007], [8014, 8015], [8062, 8063], [8127, 8129], [8141, 8143],
[8148, 8149], [8156, 8159], [8173, 8177], [8189, 8303], [8306, 8307], [8314, 8318],
[8330, 8335], [8341, 8449], [8451, 8454], [8456, 8457], [8470, 8472], [8478, 8483],
[8506, 8507], [8512, 8516], [8522, 8525], [8586, 9311], [9372, 9449], [9472, 10101],
[10132, 11263], [11493, 11498], [11503, 11516], [11518, 11519], [11558, 11567],
[11622, 11630], [11632, 11647], [11671, 11679], [11743, 11822], [11824, 12292],
[12296, 12320], [12330, 12336], [12342, 12343], [12349, 12352], [12439, 12444],
[12544, 12548], [12590, 12592], [12687, 12689], [12694, 12703], [12728, 12783],
[12800, 12831], [12842, 12880], [12896, 12927], [12938, 12976], [12992, 13311],
[19894, 19967], [40908, 40959], [42125, 42191], [42238, 42239], [42509, 42511],
[42540, 42559], [42592, 42593], [42607, 42622], [42648, 42655], [42736, 42774],
[42784, 42785], [42889, 42890], [42893, 43002], [43043, 43055], [43062, 43071],
[43124, 43137], [43188, 43215], [43226, 43249], [43256, 43258], [43260, 43263],
[43302, 43311], [43335, 43359], [43389, 43395], [43443, 43470], [43482, 43519],
[43561, 43583], [43596, 43599], [43610, 43615], [43639, 43641], [43643, 43647],
[43698, 43700], [43703, 43704], [43710, 43711], [43715, 43738], [43742, 43967],
[44003, 44015], [44026, 44031], [55204, 55215], [55239, 55242], [55292, 55295],
[57344, 63743], [64046, 64047], [64110, 64111], [64218, 64255], [64263, 64274],
[64280, 64284], [64434, 64466], [64830, 64847], [64912, 64913], [64968, 65007],
[65020, 65135], [65277, 65295], [65306, 65312], [65339, 65344], [65371, 65381],
[65471, 65473], [65480, 65481], [65488, 65489], [65496, 65497]];
for (i = 0; i < ranges.length; i++) {
start = ranges[i][0];
end = ranges[i][1];
for (j = start; j <= end; j++) {
result[j] = true;
}
}
return result;
})();
function splitQuery(query) {
var result = [];
var start = -1;
for (var i = 0; i < query.length; i++) {
if (splitChars[query.charCodeAt(i)]) {
if (start !== -1) {
result.push(query.slice(start, i));
start = -1;
}
} else if (start === -1) {
start = i;
}
}
if (start !== -1) {
result.push(query.slice(start));
}
return result;
}
/**
* Search Module
*/
@@ -335,6 +58,14 @@ var Search = {
_queued_query : null,
_pulse_status : -1,
htmlToText : function(htmlString) {
var htmlElement = document.createElement('span');
htmlElement.innerHTML = htmlString;
$(htmlElement).find('.headerlink').remove();
docContent = $(htmlElement).find('[role=main]')[0];
return docContent.textContent || docContent.innerText;
},
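`htmlToText` above relies on a detached DOM element plus jQuery to extract the `[role=main]` text. Outside a browser the same idea can only be approximated; this is a naive tag-stripping sketch (assumptions: well-formed markup, no entity decoding, and a regex stand-in for the `.headerlink` removal):

```javascript
// Naive approximation of htmlToText (the real code uses the DOM and
// removes .headerlink elements before reading textContent).
function htmlToTextSketch(html) {
  return html
    .replace(/<a class="headerlink"[^>]*>[\s\S]*?<\/a>/g, '') // drop ¶ links
    .replace(/<[^>]*>/g, '');                                 // strip tags
}
```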
init : function() {
var params = $.getQueryParameters();
if (params.q) {
@@ -399,7 +130,7 @@ var Search = {
this.out = $('#search-results');
this.title = $('<h2>' + _('Searching') + '</h2>').appendTo(this.out);
this.dots = $('<span></span>').appendTo(this.title);
this.status = $('<p style="display: none"></p>').appendTo(this.out);
this.status = $('<p class="search-summary">&nbsp;</p>').appendTo(this.out);
this.output = $('<ul class="search"/>').appendTo(this.out);
$('#search-progress').text(_('Preparing search...'));
@@ -417,7 +148,6 @@ var Search = {
*/
query : function(query) {
var i;
var stopwords = ["a","and","are","as","at","be","but","by","for","if","in","into","is","it","near","no","not","of","on","or","such","that","the","their","then","there","these","they","this","to","was","will","with"];
// stem the searchterms and add them to the correct list
var stemmer = new Stemmer();
@@ -539,8 +269,7 @@ var Search = {
displayNextItem();
});
} else if (DOCUMENTATION_OPTIONS.HAS_SOURCE) {
var suffix = DOCUMENTATION_OPTIONS.SOURCELINK_SUFFIX;
$.ajax({url: DOCUMENTATION_OPTIONS.URL_ROOT + '_sources/' + item[5] + (item[5].slice(-suffix.length) === suffix ? '' : suffix),
$.ajax({url: DOCUMENTATION_OPTIONS.URL_ROOT + item[0] + DOCUMENTATION_OPTIONS.FILE_SUFFIX,
dataType: "text",
complete: function(jqxhr, textstatus) {
var data = jqxhr.responseText;
@@ -590,12 +319,13 @@ var Search = {
for (var prefix in objects) {
for (var name in objects[prefix]) {
var fullname = (prefix ? prefix + '.' : '') + name;
if (fullname.toLowerCase().indexOf(object) > -1) {
var fullnameLower = fullname.toLowerCase()
if (fullnameLower.indexOf(object) > -1) {
var score = 0;
var parts = fullname.split('.');
var parts = fullnameLower.split('.');
// check for different match types: exact matches of full name or
// "last name" (i.e. last dotted part)
if (fullname == object || parts[parts.length - 1] == object) {
if (fullnameLower == object || parts[parts.length - 1] == object) {
score += Scorer.objNameMatch;
// matches in last name
} else if (parts[parts.length - 1].indexOf(object) > -1) {
@@ -662,6 +392,19 @@ var Search = {
{files: terms[word], score: Scorer.term},
{files: titleterms[word], score: Scorer.title}
];
// add support for partial matches
if (word.length > 2) {
for (var w in terms) {
if (w.match(word) && !terms[word]) {
_o.push({files: terms[w], score: Scorer.partialTerm})
}
}
for (var w in titleterms) {
if (w.match(word) && !titleterms[word]) {
_o.push({files: titleterms[w], score: Scorer.partialTitle})
}
}
}
// no match but word was a required one
if ($u.every(_o, function(o){return o.files === undefined;})) {
@@ -701,8 +444,12 @@ var Search = {
var valid = true;
// check if all requirements are matched
if (fileMap[file].length != searchterms.length)
continue;
var filteredTermCount = // as search terms with length < 3 are discarded: ignore
searchterms.filter(function(term){return term.length > 2}).length
if (
fileMap[file].length != searchterms.length &&
fileMap[file].length != filteredTermCount
) continue;
// ensure that none of the excluded terms is in the search result
for (i = 0; i < excluded.length; i++) {
@@ -733,7 +480,8 @@ var Search = {
* words. the first one is used to find the occurrence, the
* latter for highlighting it.
*/
makeSearchSummary : function(text, keywords, hlwords) {
makeSearchSummary : function(htmlText, keywords, hlwords) {
var text = Search.htmlToText(htmlText);
var textLower = text.toLowerCase();
var start = 0;
$.each(keywords, function() {
@@ -755,4 +503,4 @@ var Search = {
$(document).ready(function() {
Search.init();
});
});


@@ -1,808 +0,0 @@
/*
* websupport.js
* ~~~~~~~~~~~~~
*
* sphinx.websupport utilities for all documentation.
*
* :copyright: Copyright 2007-2017 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
(function($) {
$.fn.autogrow = function() {
return this.each(function() {
var textarea = this;
$.fn.autogrow.resize(textarea);
$(textarea)
.focus(function() {
textarea.interval = setInterval(function() {
$.fn.autogrow.resize(textarea);
}, 500);
})
.blur(function() {
clearInterval(textarea.interval);
});
});
};
$.fn.autogrow.resize = function(textarea) {
var lineHeight = parseInt($(textarea).css('line-height'), 10);
var lines = textarea.value.split('\n');
var columns = textarea.cols;
var lineCount = 0;
$.each(lines, function() {
lineCount += Math.ceil(this.length / columns) || 1;
});
var height = lineHeight * (lineCount + 1);
$(textarea).css('height', height);
};
})(jQuery);
(function($) {
var comp, by;
function init() {
initEvents();
initComparator();
}
function initEvents() {
$(document).on("click", 'a.comment-close', function(event) {
event.preventDefault();
hide($(this).attr('id').substring(2));
});
$(document).on("click", 'a.vote', function(event) {
event.preventDefault();
handleVote($(this));
});
$(document).on("click", 'a.reply', function(event) {
event.preventDefault();
openReply($(this).attr('id').substring(2));
});
$(document).on("click", 'a.close-reply', function(event) {
event.preventDefault();
closeReply($(this).attr('id').substring(2));
});
$(document).on("click", 'a.sort-option', function(event) {
event.preventDefault();
handleReSort($(this));
});
$(document).on("click", 'a.show-proposal', function(event) {
event.preventDefault();
showProposal($(this).attr('id').substring(2));
});
$(document).on("click", 'a.hide-proposal', function(event) {
event.preventDefault();
hideProposal($(this).attr('id').substring(2));
});
$(document).on("click", 'a.show-propose-change', function(event) {
event.preventDefault();
showProposeChange($(this).attr('id').substring(2));
});
$(document).on("click", 'a.hide-propose-change', function(event) {
event.preventDefault();
hideProposeChange($(this).attr('id').substring(2));
});
$(document).on("click", 'a.accept-comment', function(event) {
event.preventDefault();
acceptComment($(this).attr('id').substring(2));
});
$(document).on("click", 'a.delete-comment', function(event) {
event.preventDefault();
deleteComment($(this).attr('id').substring(2));
});
$(document).on("click", 'a.comment-markup', function(event) {
event.preventDefault();
toggleCommentMarkupBox($(this).attr('id').substring(2));
});
}
/**
* Set comp, which is a comparator function used for sorting and
* inserting comments into the list.
*/
function setComparator() {
// If the first three letters are "asc", sort in ascending order
// and remove the prefix.
if (by.substring(0,3) == 'asc') {
var i = by.substring(3);
comp = function(a, b) { return a[i] - b[i]; };
} else {
// Otherwise sort in descending order.
comp = function(a, b) { return b[by] - a[by]; };
}
// Reset link styles and format the selected sort option.
$('a.sel').attr('href', '#').removeClass('sel');
$('a.by' + by).removeAttr('href').addClass('sel');
}
/**
* Create a comp function. If the user has preferences stored in
* the sortBy cookie, use those, otherwise use the default.
*/
function initComparator() {
by = 'rating'; // Default to sort by rating.
// If the sortBy cookie is set, use that instead.
if (document.cookie.length > 0) {
var start = document.cookie.indexOf('sortBy=');
if (start != -1) {
start = start + 7;
var end = document.cookie.indexOf(";", start);
if (end == -1) {
end = document.cookie.length;
by = unescape(document.cookie.substring(start, end));
}
}
}
setComparator();
}
/**
* Show a comment div.
*/
function show(id) {
$('#ao' + id).hide();
$('#ah' + id).show();
var context = $.extend({id: id}, opts);
var popup = $(renderTemplate(popupTemplate, context)).hide();
popup.find('textarea[name="proposal"]').hide();
popup.find('a.by' + by).addClass('sel');
var form = popup.find('#cf' + id);
form.submit(function(event) {
event.preventDefault();
addComment(form);
});
$('#s' + id).after(popup);
popup.slideDown('fast', function() {
getComments(id);
});
}
/**
* Hide a comment div.
*/
function hide(id) {
$('#ah' + id).hide();
$('#ao' + id).show();
var div = $('#sc' + id);
div.slideUp('fast', function() {
div.remove();
});
}
/**
* Perform an ajax request to get comments for a node
* and insert the comments into the comments tree.
*/
function getComments(id) {
$.ajax({
type: 'GET',
url: opts.getCommentsURL,
data: {node: id},
success: function(data, textStatus, request) {
var ul = $('#cl' + id);
var speed = 100;
$('#cf' + id)
.find('textarea[name="proposal"]')
.data('source', data.source);
if (data.comments.length === 0) {
ul.html('<li>No comments yet.</li>');
ul.data('empty', true);
} else {
// If there are comments, sort them and put them in the list.
var comments = sortComments(data.comments);
speed = data.comments.length * 100;
appendComments(comments, ul);
ul.data('empty', false);
}
$('#cn' + id).slideUp(speed + 200);
ul.slideDown(speed);
},
error: function(request, textStatus, error) {
showError('Oops, there was a problem retrieving the comments.');
},
dataType: 'json'
});
}
/**
* Add a comment via ajax and insert the comment into the comment tree.
*/
function addComment(form) {
var node_id = form.find('input[name="node"]').val();
var parent_id = form.find('input[name="parent"]').val();
var text = form.find('textarea[name="comment"]').val();
var proposal = form.find('textarea[name="proposal"]').val();
if (text == '') {
showError('Please enter a comment.');
return;
}
// Disable the form that is being submitted.
form.find('textarea,input').attr('disabled', 'disabled');
// Send the comment to the server.
$.ajax({
type: "POST",
url: opts.addCommentURL,
dataType: 'json',
data: {
node: node_id,
parent: parent_id,
text: text,
proposal: proposal
},
success: function(data, textStatus, error) {
// Reset the form.
if (node_id) {
hideProposeChange(node_id);
}
form.find('textarea')
.val('')
.add(form.find('input'))
.removeAttr('disabled');
var ul = $('#cl' + (node_id || parent_id));
if (ul.data('empty')) {
$(ul).empty();
ul.data('empty', false);
}
insertComment(data.comment);
var ao = $('#ao' + node_id);
ao.find('img').attr({'src': opts.commentBrightImage});
if (node_id) {
// if this was a "root" comment, remove the commenting box
// (the user can get it back by reopening the comment popup)
$('#ca' + node_id).slideUp();
}
},
error: function(request, textStatus, error) {
form.find('textarea,input').removeAttr('disabled');
showError('Oops, there was a problem adding the comment.');
}
});
}
/**
* Recursively append comments to the main comment list and children
* lists, creating the comment tree.
*/
function appendComments(comments, ul) {
$.each(comments, function() {
var div = createCommentDiv(this);
ul.append($(document.createElement('li')).html(div));
appendComments(this.children, div.find('ul.comment-children'));
// To avoid stale data, don't store the comment's children in data.
this.children = null;
div.data('comment', this);
});
}
/**
* After adding a new comment, it must be inserted in the correct
* location in the comment tree.
*/
function insertComment(comment) {
var div = createCommentDiv(comment);
// To avoid stale data, don't store the comment's children in data.
comment.children = null;
div.data('comment', comment);
var ul = $('#cl' + (comment.node || comment.parent));
var siblings = getChildren(ul);
var li = $(document.createElement('li'));
li.hide();
// Determine where in the parent's children list to insert this comment.
for (var i = 0; i < siblings.length; i++) {
if (comp(comment, siblings[i]) <= 0) {
$('#cd' + siblings[i].id)
.parent()
.before(li.html(div));
li.slideDown('fast');
return;
}
}
// If we get here, this comment rates lower than all the others,
// or it is the only comment in the list.
ul.append(li.html(div));
li.slideDown('fast');
}
function acceptComment(id) {
$.ajax({
type: 'POST',
url: opts.acceptCommentURL,
data: {id: id},
success: function(data, textStatus, request) {
$('#cm' + id).fadeOut('fast');
$('#cd' + id).removeClass('moderate');
},
error: function(request, textStatus, error) {
showError('Oops, there was a problem accepting the comment.');
}
});
}
function deleteComment(id) {
$.ajax({
type: 'POST',
url: opts.deleteCommentURL,
data: {id: id},
success: function(data, textStatus, request) {
var div = $('#cd' + id);
if (data == 'delete') {
// Moderator mode: remove the comment and all children immediately
div.slideUp('fast', function() {
div.remove();
});
return;
}
// User mode: only mark the comment as deleted
div
.find('span.user-id:first')
.text('[deleted]').end()
.find('div.comment-text:first')
.text('[deleted]').end()
.find('#cm' + id + ', #dc' + id + ', #ac' + id + ', #rc' + id +
', #sp' + id + ', #hp' + id + ', #cr' + id + ', #rl' + id)
.remove();
var comment = div.data('comment');
comment.username = '[deleted]';
comment.text = '[deleted]';
div.data('comment', comment);
},
error: function(request, textStatus, error) {
showError('Oops, there was a problem deleting the comment.');
}
});
}
function showProposal(id) {
$('#sp' + id).hide();
$('#hp' + id).show();
$('#pr' + id).slideDown('fast');
}
function hideProposal(id) {
$('#hp' + id).hide();
$('#sp' + id).show();
$('#pr' + id).slideUp('fast');
}
function showProposeChange(id) {
$('#pc' + id).hide();
$('#hc' + id).show();
var textarea = $('#pt' + id);
textarea.val(textarea.data('source'));
$.fn.autogrow.resize(textarea[0]);
textarea.slideDown('fast');
}
function hideProposeChange(id) {
$('#hc' + id).hide();
$('#pc' + id).show();
var textarea = $('#pt' + id);
textarea.val('').removeAttr('disabled');
textarea.slideUp('fast');
}
function toggleCommentMarkupBox(id) {
$('#mb' + id).toggle();
}
/** Handle when the user clicks on a sort by link. */
function handleReSort(link) {
var classes = link.attr('class').split(/\s+/);
for (var i=0; i<classes.length; i++) {
if (classes[i] != 'sort-option') {
by = classes[i].substring(2);
}
}
setComparator();
// Save/update the sortBy cookie.
var expiration = new Date();
expiration.setDate(expiration.getDate() + 365);
document.cookie= 'sortBy=' + escape(by) +
';expires=' + expiration.toUTCString();
$('ul.comment-ul').each(function(index, ul) {
var comments = getChildren($(ul), true);
comments = sortComments(comments);
appendComments(comments, $(ul).empty());
});
}
/**
* Function to process a vote when a user clicks an arrow.
*/
function handleVote(link) {
if (!opts.voting) {
showError("You'll need to login to vote.");
return;
}
var id = link.attr('id');
if (!id) {
// Didn't click on one of the voting arrows.
return;
}
// If it is an unvote, the new vote value is 0;
// otherwise it's 1 for an upvote, or -1 for a downvote.
var value = 0;
if (id.charAt(1) != 'u') {
value = id.charAt(0) == 'u' ? 1 : -1;
}
// The data to be sent to the server.
var d = {
comment_id: id.substring(2),
value: value
};
// Swap the vote and unvote links.
link.hide();
$('#' + id.charAt(0) + (id.charAt(1) == 'u' ? 'v' : 'u') + d.comment_id)
.show();
// The div the comment is displayed in.
var div = $('div#cd' + d.comment_id);
var data = div.data('comment');
// If this is not an unvote, and the other vote arrow has
// already been pressed, unpress it.
if ((d.value !== 0) && (data.vote === d.value * -1)) {
$('#' + (d.value == 1 ? 'd' : 'u') + 'u' + d.comment_id).hide();
$('#' + (d.value == 1 ? 'd' : 'u') + 'v' + d.comment_id).show();
}
// Update the comment's rating in the local data.
data.rating += (data.vote === 0) ? d.value : (d.value - data.vote);
data.vote = d.value;
div.data('comment', data);
// Change the rating text.
div.find('.rating:first')
.text(data.rating + ' point' + (data.rating == 1 ? '' : 's'));
// Send the vote information to the server.
$.ajax({
type: "POST",
url: opts.processVoteURL,
data: d,
error: function(request, textStatus, error) {
showError('Oops, there was a problem casting that vote.');
}
});
}
/**
* Open a reply form used to reply to an existing comment.
*/
function openReply(id) {
// Swap out the reply link for the hide link
$('#rl' + id).hide();
$('#cr' + id).show();
// Add the reply li to the children ul.
var div = $(renderTemplate(replyTemplate, {id: id})).hide();
$('#cl' + id)
.prepend(div)
// Setup the submit handler for the reply form.
.find('#rf' + id)
.submit(function(event) {
event.preventDefault();
addComment($('#rf' + id));
closeReply(id);
})
.find('input[type=button]')
.click(function() {
closeReply(id);
});
div.slideDown('fast', function() {
$('#rf' + id).find('textarea').focus();
});
}
/**
* Close the reply form opened with openReply.
*/
function closeReply(id) {
// Remove the reply div from the DOM.
$('#rd' + id).slideUp('fast', function() {
$(this).remove();
});
// Swap out the hide link for the reply link
$('#cr' + id).hide();
$('#rl' + id).show();
}
/**
* Recursively sort a tree of comments using the comp comparator.
*/
function sortComments(comments) {
comments.sort(comp);
$.each(comments, function() {
this.children = sortComments(this.children);
});
return comments;
}
/**
* Get the children comments from a ul. If recursive is true,
* recursively include children's children.
*/
function getChildren(ul, recursive) {
var children = [];
ul.children().children("[id^='cd']")
.each(function() {
var comment = $(this).data('comment');
if (recursive)
comment.children = getChildren($(this).find('#cl' + comment.id), true);
children.push(comment);
});
return children;
}
/** Create a div to display a comment in. */
function createCommentDiv(comment) {
if (!comment.displayed && !opts.moderator) {
return $('<div class="moderate">Thank you! Your comment will show up '
+ 'once it has been approved by a moderator.</div>');
}
// Prettify the comment rating.
comment.pretty_rating = comment.rating + ' point' +
(comment.rating == 1 ? '' : 's');
// Make a class (for displaying not yet moderated comments differently)
comment.css_class = comment.displayed ? '' : ' moderate';
// Create a div for this comment.
var context = $.extend({}, opts, comment);
var div = $(renderTemplate(commentTemplate, context));
// If the user has voted on this comment, highlight the correct arrow.
if (comment.vote) {
var direction = (comment.vote == 1) ? 'u' : 'd';
div.find('#' + direction + 'v' + comment.id).hide();
div.find('#' + direction + 'u' + comment.id).show();
}
if (opts.moderator || comment.text != '[deleted]') {
div.find('a.reply').show();
if (comment.proposal_diff)
div.find('#sp' + comment.id).show();
if (opts.moderator && !comment.displayed)
div.find('#cm' + comment.id).show();
if (opts.moderator || (opts.username == comment.username))
div.find('#dc' + comment.id).show();
}
return div;
}
/**
* A simple template renderer. Placeholders such as <%id%> are replaced
* by context['id'] with items being escaped. Placeholders such as <#id#>
* are not escaped.
*/
function renderTemplate(template, context) {
var esc = $(document.createElement('div'));
function handle(ph, escape) {
var cur = context;
$.each(ph.split('.'), function() {
cur = cur[this];
});
return escape ? esc.text(cur || "").html() : cur;
}
return template.replace(/<([%#])([\w\.]*)\1>/g, function() {
return handle(arguments[2], arguments[1] == '%' ? true : false);
});
}
/** Flash an error message briefly. */
function showError(message) {
$(document.createElement('div')).attr({'class': 'popup-error'})
.append($(document.createElement('div'))
.attr({'class': 'error-message'}).text(message))
.appendTo('body')
.fadeIn("slow")
.delay(2000)
.fadeOut("slow");
}
/** Add a link the user uses to open the comments popup. */
$.fn.comment = function() {
return this.each(function() {
var id = $(this).attr('id').substring(1);
var count = COMMENT_METADATA[id];
var title = count + ' comment' + (count == 1 ? '' : 's');
var image = count > 0 ? opts.commentBrightImage : opts.commentImage;
var addcls = count == 0 ? ' nocomment' : '';
$(this)
.append(
$(document.createElement('a')).attr({
href: '#',
'class': 'sphinx-comment-open' + addcls,
id: 'ao' + id
})
.append($(document.createElement('img')).attr({
src: image,
alt: 'comment',
title: title
}))
.click(function(event) {
event.preventDefault();
show($(this).attr('id').substring(2));
})
)
.append(
$(document.createElement('a')).attr({
href: '#',
'class': 'sphinx-comment-close hidden',
id: 'ah' + id
})
.append($(document.createElement('img')).attr({
src: opts.closeCommentImage,
alt: 'close',
title: 'close'
}))
.click(function(event) {
event.preventDefault();
hide($(this).attr('id').substring(2));
})
);
});
};
var opts = {
processVoteURL: '/_process_vote',
addCommentURL: '/_add_comment',
getCommentsURL: '/_get_comments',
acceptCommentURL: '/_accept_comment',
deleteCommentURL: '/_delete_comment',
commentImage: '/static/_static/comment.png',
closeCommentImage: '/static/_static/comment-close.png',
loadingImage: '/static/_static/ajax-loader.gif',
commentBrightImage: '/static/_static/comment-bright.png',
upArrow: '/static/_static/up.png',
downArrow: '/static/_static/down.png',
upArrowPressed: '/static/_static/up-pressed.png',
downArrowPressed: '/static/_static/down-pressed.png',
voting: false,
moderator: false
};
if (typeof COMMENT_OPTIONS != "undefined") {
opts = jQuery.extend(opts, COMMENT_OPTIONS);
}
var popupTemplate = '\
<div class="sphinx-comments" id="sc<%id%>">\
<p class="sort-options">\
Sort by:\
<a href="#" class="sort-option byrating">best rated</a>\
<a href="#" class="sort-option byascage">newest</a>\
<a href="#" class="sort-option byage">oldest</a>\
</p>\
<div class="comment-header">Comments</div>\
<div class="comment-loading" id="cn<%id%>">\
loading comments... <img src="<%loadingImage%>" alt="" /></div>\
<ul id="cl<%id%>" class="comment-ul"></ul>\
<div id="ca<%id%>">\
<p class="add-a-comment">Add a comment\
(<a href="#" class="comment-markup" id="ab<%id%>">markup</a>):</p>\
<div class="comment-markup-box" id="mb<%id%>">\
reStructured text markup: <i>*emph*</i>, <b>**strong**</b>, \
<code>``code``</code>, \
code blocks: <code>::</code> and an indented block after blank line</div>\
<form method="post" id="cf<%id%>" class="comment-form" action="">\
<textarea name="comment" cols="80"></textarea>\
<p class="propose-button">\
<a href="#" id="pc<%id%>" class="show-propose-change">\
Propose a change &#9657;\
</a>\
<a href="#" id="hc<%id%>" class="hide-propose-change">\
Propose a change &#9663;\
</a>\
</p>\
<textarea name="proposal" id="pt<%id%>" cols="80"\
spellcheck="false"></textarea>\
<input type="submit" value="Add comment" />\
<input type="hidden" name="node" value="<%id%>" />\
<input type="hidden" name="parent" value="" />\
</form>\
</div>\
</div>';
var commentTemplate = '\
<div id="cd<%id%>" class="sphinx-comment<%css_class%>">\
<div class="vote">\
<div class="arrow">\
<a href="#" id="uv<%id%>" class="vote" title="vote up">\
<img src="<%upArrow%>" />\
</a>\
<a href="#" id="uu<%id%>" class="un vote" title="vote up">\
<img src="<%upArrowPressed%>" />\
</a>\
</div>\
<div class="arrow">\
<a href="#" id="dv<%id%>" class="vote" title="vote down">\
<img src="<%downArrow%>" id="da<%id%>" />\
</a>\
<a href="#" id="du<%id%>" class="un vote" title="vote down">\
<img src="<%downArrowPressed%>" />\
</a>\
</div>\
</div>\
<div class="comment-content">\
<p class="tagline comment">\
<span class="user-id"><%username%></span>\
<span class="rating"><%pretty_rating%></span>\
<span class="delta"><%time.delta%></span>\
</p>\
<div class="comment-text comment"><#text#></div>\
<p class="comment-opts comment">\
<a href="#" class="reply hidden" id="rl<%id%>">reply &#9657;</a>\
<a href="#" class="close-reply" id="cr<%id%>">reply &#9663;</a>\
<a href="#" id="sp<%id%>" class="show-proposal">proposal &#9657;</a>\
<a href="#" id="hp<%id%>" class="hide-proposal">proposal &#9663;</a>\
<a href="#" id="dc<%id%>" class="delete-comment hidden">delete</a>\
<span id="cm<%id%>" class="moderation hidden">\
<a href="#" id="ac<%id%>" class="accept-comment">accept</a>\
</span>\
</p>\
<pre class="proposal" id="pr<%id%>">\
<#proposal_diff#>\
</pre>\
<ul class="comment-children" id="cl<%id%>"></ul>\
</div>\
<div class="clearleft"></div>\
</div>\
</div>';
var replyTemplate = '\
<li>\
<div class="reply-div" id="rd<%id%>">\
<form id="rf<%id%>">\
<textarea name="comment" cols="80"></textarea>\
<input type="submit" value="Add reply" />\
<input type="button" value="Cancel" />\
<input type="hidden" name="parent" value="<%id%>" />\
<input type="hidden" name="node" value="" />\
</form>\
</div>\
</li>';
$(document).ready(function() {
init();
});
})(jQuery);
$(document).ready(function() {
// add comment anchors for all paragraphs that are commentable
$('.sphinx-has-comment').comment();
// highlight search words in search results
$("div.context").each(function() {
var params = $.getQueryParameters();
var terms = (params.q) ? params.q[0].split(/\s+/) : [];
var result = $(this);
$.each(terms, function() {
result.highlightText(this.toLowerCase(), 'highlighted');
});
});
// directly open comment window if requested
var anchor = document.location.hash;
if (anchor.substring(0, 9) == '#comment-') {
$('#ao' + anchor.substring(9)).click();
document.location.hash = '#s' + anchor.substring(9);
}
});
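The `renderTemplate` function above drives all three templates (`popupTemplate`, `commentTemplate`, `replyTemplate`): `<%path%>` placeholders are HTML-escaped, `<#path#>` placeholders are inserted verbatim. A standalone sketch of the same substitution — `escapeHtml` and the sample template are illustrative stand-ins; the real code escapes via a detached jQuery element:

```javascript
// Plain-string stand-in for the jQuery-based escaping in renderTemplate.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

function renderTemplateSketch(template, context) {
  // Resolve dotted paths like 'time.delta' against the context object.
  function lookup(path) {
    var cur = context;
    path.split('.').forEach(function(key) { cur = cur[key]; });
    return cur;
  }
  // Same placeholder regex as websupport.js: the delimiter (% or #)
  // decides whether the looked-up value is escaped.
  return template.replace(/<([%#])([\w\.]*)\1>/g, function(match, kind, path) {
    var value = lookup(path);
    return kind === '%' ? escapeHtml(value || '') : value;
  });
}

var out = renderTemplateSketch(
  '<div id="cd<%id%>"><#text#></div>',
  {id: 42, text: '<b>bold</b>'}
);
console.log(out);  // -> '<div id="cd42"><b>bold</b></div>'
```

This is why `commentTemplate` emits the comment body with `<#text#>`: the server-rendered comment HTML must land in the page unescaped, while ids and usernames go through the escaping branch.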

View File

@@ -1,34 +1,21 @@
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta charset="utf-8" />
<title>Index &#8212; ffmpeg-python documentation</title>
<link rel="stylesheet" href="_static/nature.css" type="text/css" />
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT: './',
VERSION: '',
COLLAPSE_INDEX: false,
FILE_SUFFIX: '.html',
HAS_SOURCE: true,
SOURCELINK_SUFFIX: '.txt'
};
</script>
<script type="text/javascript" id="documentation_options" data-url_root="./" src="_static/documentation_options.js"></script>
<script type="text/javascript" src="_static/jquery.js"></script>
<script type="text/javascript" src="_static/underscore.js"></script>
<script type="text/javascript" src="_static/doctools.js"></script>
<script type="text/javascript" src="_static/language_data.js"></script>
<link rel="index" title="Index" href="#" />
<link rel="search" title="Search" href="search.html" />
</head>
<body>
</head><body>
<div class="related" role="navigation" aria-label="related navigation">
<h3>Navigation</h3>
<ul>
@@ -51,14 +38,17 @@
<h1 id="index">Index</h1>
<div class="genindex-jumpbox">
<a href="#C"><strong>C</strong></a>
<a href="#A"><strong>A</strong></a>
| <a href="#C"><strong>C</strong></a>
| <a href="#D"><strong>D</strong></a>
| <a href="#E"><strong>E</strong></a>
| <a href="#F"><strong>F</strong></a>
| <a href="#G"><strong>G</strong></a>
| <a href="#H"><strong>H</strong></a>
| <a href="#I"><strong>I</strong></a>
| <a href="#M"><strong>M</strong></a>
| <a href="#O"><strong>O</strong></a>
| <a href="#P"><strong>P</strong></a>
| <a href="#R"><strong>R</strong></a>
| <a href="#S"><strong>S</strong></a>
| <a href="#T"><strong>T</strong></a>
@@ -66,14 +56,26 @@
| <a href="#Z"><strong>Z</strong></a>
</div>
<h2 id="A">A</h2>
<table style="width: 100%" class="indextable genindextable"><tr>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="index.html#ffmpeg.Stream.audio">audio() (ffmpeg.Stream property)</a>
</li>
</ul></td>
</tr></table>
<h2 id="C">C</h2>
<table style="width: 100%" class="indextable genindextable"><tr>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="index.html#ffmpeg.colorchannelmixer">colorchannelmixer() (in module ffmpeg)</a>
</li>
<li><a href="index.html#ffmpeg.compile">compile() (in module ffmpeg)</a>
</li>
</ul></td>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="index.html#ffmpeg.concat">concat() (in module ffmpeg)</a>
</li>
<li><a href="index.html#ffmpeg.crop">crop() (in module ffmpeg)</a>
</li>
</ul></td>
</tr></table>
@@ -82,6 +84,18 @@
<table style="width: 100%" class="indextable genindextable"><tr>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="index.html#ffmpeg.drawbox">drawbox() (in module ffmpeg)</a>
</li>
</ul></td>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="index.html#ffmpeg.drawtext">drawtext() (in module ffmpeg)</a>
</li>
</ul></td>
</tr></table>
<h2 id="E">E</h2>
<table style="width: 100%" class="indextable genindextable"><tr>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="index.html#ffmpeg.Error">Error</a>
</li>
</ul></td>
</tr></table>
@@ -90,12 +104,14 @@
<table style="width: 100%" class="indextable genindextable"><tr>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="index.html#module-ffmpeg">ffmpeg (module)</a>
</li>
<li><a href="index.html#ffmpeg.filter">filter() (in module ffmpeg)</a>
</li>
</ul></td>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="index.html#ffmpeg.filter_">filter_() (in module ffmpeg)</a>
</li>
<li><a href="index.html#ffmpeg.filter_multi">filter_multi() (in module ffmpeg)</a>
<li><a href="index.html#ffmpeg.filter_multi_output">filter_multi_output() (in module ffmpeg)</a>
</li>
</ul></td>
</tr></table>
@@ -150,10 +166,22 @@
</ul></td>
</tr></table>
<h2 id="P">P</h2>
<table style="width: 100%" class="indextable genindextable"><tr>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="index.html#ffmpeg.probe">probe() (in module ffmpeg)</a>
</li>
</ul></td>
</tr></table>
<h2 id="R">R</h2>
<table style="width: 100%" class="indextable genindextable"><tr>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="index.html#ffmpeg.run">run() (in module ffmpeg)</a>
</li>
</ul></td>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="index.html#ffmpeg.run_async">run_async() (in module ffmpeg)</a>
</li>
</ul></td>
</tr></table>
@@ -162,6 +190,10 @@
<table style="width: 100%" class="indextable genindextable"><tr>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="index.html#ffmpeg.setpts">setpts() (in module ffmpeg)</a>
</li>
</ul></td>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="index.html#ffmpeg.Stream">Stream (class in ffmpeg)</a>
</li>
</ul></td>
</tr></table>
@@ -179,6 +211,16 @@
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="index.html#ffmpeg.vflip">vflip() (in module ffmpeg)</a>
</li>
<li><a href="index.html#ffmpeg.Stream.video">video() (ffmpeg.Stream property)</a>
</li>
</ul></td>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="index.html#ffmpeg.Stream.view">view() (ffmpeg.Stream method)</a>
<ul>
<li><a href="index.html#ffmpeg.view">(in module ffmpeg)</a>
</li>
</ul></li>
</ul></td>
</tr></table>
@@ -197,17 +239,14 @@
</div>
<div class="sphinxsidebar" role="navigation" aria-label="main navigation">
<div class="sphinxsidebarwrapper">
<div id="searchbox" style="display: none" role="search">
<h3>Quick search</h3>
<h3 id="searchlabel">Quick search</h3>
<div class="searchformwrapper">
<form class="search" action="search.html" method="get">
<div><input type="text" name="q" /></div>
<div><input type="submit" value="Go" /></div>
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
<input type="text" name="q" aria-labelledby="searchlabel" />
<input type="submit" value="Go" />
</form>
</div>
</div>
<script type="text/javascript">$('#searchbox').show(0);</script>
</div>
@@ -228,7 +267,7 @@
</div>
<div class="footer" role="contentinfo">
&#169; Copyright 2017, Karl Kroening.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.6.1.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 2.1.0.
</div>
</body>
</html>

View File

@@ -1,33 +1,20 @@
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta charset="utf-8" />
<title>ffmpeg-python: Python bindings for FFmpeg &#8212; ffmpeg-python documentation</title>
<link rel="stylesheet" href="_static/nature.css" type="text/css" />
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT: './',
VERSION: '',
COLLAPSE_INDEX: false,
FILE_SUFFIX: '.html',
HAS_SOURCE: true,
SOURCELINK_SUFFIX: '.txt'
};
</script>
<script type="text/javascript" id="documentation_options" data-url_root="./" src="_static/documentation_options.js"></script>
<script type="text/javascript" src="_static/jquery.js"></script>
<script type="text/javascript" src="_static/underscore.js"></script>
<script type="text/javascript" src="_static/doctools.js"></script>
<script type="text/javascript" src="_static/language_data.js"></script>
<link rel="index" title="Index" href="genindex.html" />
<link rel="search" title="Search" href="search.html" />
</head>
<body>
</head><body>
<div class="related" role="navigation" aria-label="related navigation">
<h3>Navigation</h3>
<ul>
@@ -48,37 +35,286 @@
<div class="section" id="ffmpeg-python-python-bindings-for-ffmpeg">
<h1>ffmpeg-python: Python bindings for FFmpeg<a class="headerlink" href="#ffmpeg-python-python-bindings-for-ffmpeg" title="Permalink to this headline"></a></h1>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Github:</th><td class="field-body"><a class="reference external" href="https://github.com/kkroening/ffmpeg-python">https://github.com/kkroening/ffmpeg-python</a></td>
</tr>
</tbody>
</table>
<dl class="field-list simple">
<dt class="field-odd">Github</dt>
<dd class="field-odd"><p><a class="reference external" href="https://github.com/kkroening/ffmpeg-python">https://github.com/kkroening/ffmpeg-python</a></p>
</dd>
</dl>
<div class="toctree-wrapper compound">
</div>
<span class="target" id="module-ffmpeg"></span><dl class="function">
<span class="target" id="module-ffmpeg"></span><dl class="class">
<dt id="ffmpeg.Stream">
<em class="property">class </em><code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">Stream</code><span class="sig-paren">(</span><em class="sig-param">upstream_node</em>, <em class="sig-param">upstream_label</em>, <em class="sig-param">node_types</em>, <em class="sig-param">upstream_selector=None</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.Stream" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">object</span></code></p>
<p>Represents the outgoing edge of an upstream node; may be used to create more downstream nodes.</p>
<dl class="method">
<dt id="ffmpeg.Stream.audio">
<em class="property">property </em><code class="sig-name descname">audio</code><a class="headerlink" href="#ffmpeg.Stream.audio" title="Permalink to this definition"></a></dt>
<dd><p>Select the audio-portion of a stream.</p>
<p>Some ffmpeg filters drop audio streams, and care must be taken
to preserve the audio in the final output. The <code class="docutils literal notranslate"><span class="pre">.audio</span></code> and
<code class="docutils literal notranslate"><span class="pre">.video</span></code> operators can be used to reference the audio/video
portions of a stream so that they can be processed separately
and then re-combined later in the pipeline. This dilemma is
intrinsic to ffmpeg, and ffmpeg-python tries to stay out of the
way while users may refer to the official ffmpeg documentation
as to why certain filters drop audio.</p>
<p><code class="docutils literal notranslate"><span class="pre">stream.audio</span></code> is a shorthand for <code class="docutils literal notranslate"><span class="pre">stream['a']</span></code>.</p>
<p class="rubric">Example</p>
<p>Process the audio and video portions of a stream independently:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="nb">input</span> <span class="o">=</span> <span class="n">ffmpeg</span><span class="o">.</span><span class="n">input</span><span class="p">(</span><span class="s1">&#39;in.mp4&#39;</span><span class="p">)</span>
<span class="n">audio</span> <span class="o">=</span> <span class="nb">input</span><span class="o">.</span><span class="n">audio</span><span class="o">.</span><span class="n">filter</span><span class="p">(</span><span class="s2">&quot;aecho&quot;</span><span class="p">,</span> <span class="mf">0.8</span><span class="p">,</span> <span class="mf">0.9</span><span class="p">,</span> <span class="mi">1000</span><span class="p">,</span> <span class="mf">0.3</span><span class="p">)</span>
<span class="n">video</span> <span class="o">=</span> <span class="nb">input</span><span class="o">.</span><span class="n">video</span><span class="o">.</span><span class="n">hflip</span><span class="p">()</span>
<span class="n">out</span> <span class="o">=</span> <span class="n">ffmpeg</span><span class="o">.</span><span class="n">output</span><span class="p">(</span><span class="n">audio</span><span class="p">,</span> <span class="n">video</span><span class="p">,</span> <span class="s1">&#39;out.mp4&#39;</span><span class="p">)</span>
</pre></div>
</div>
</dd></dl>
<dl class="method">
<dt id="ffmpeg.Stream.video">
<em class="property">property </em><code class="sig-name descname">video</code><a class="headerlink" href="#ffmpeg.Stream.video" title="Permalink to this definition"></a></dt>
<dd><p>Select the video portion of a stream.</p>
<p>Some ffmpeg filters drop audio streams, and care must be taken
to preserve the audio in the final output. The <code class="docutils literal notranslate"><span class="pre">.audio</span></code> and
<code class="docutils literal notranslate"><span class="pre">.video</span></code> operators can be used to reference the audio/video
portions of a stream so that they can be processed separately
and then re-combined later in the pipeline. This dilemma is
intrinsic to ffmpeg; ffmpeg-python stays out of the way, and the
official ffmpeg documentation explains why certain filters drop
audio.</p>
<p><code class="docutils literal notranslate"><span class="pre">stream.video</span></code> is a shorthand for <code class="docutils literal notranslate"><span class="pre">stream['v']</span></code>.</p>
<p class="rubric">Example</p>
<p>Process the audio and video portions of a stream independently:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="nb">input</span> <span class="o">=</span> <span class="n">ffmpeg</span><span class="o">.</span><span class="n">input</span><span class="p">(</span><span class="s1">&#39;in.mp4&#39;</span><span class="p">)</span>
<span class="n">audio</span> <span class="o">=</span> <span class="nb">input</span><span class="o">.</span><span class="n">audio</span><span class="o">.</span><span class="n">filter</span><span class="p">(</span><span class="s2">&quot;aecho&quot;</span><span class="p">,</span> <span class="mf">0.8</span><span class="p">,</span> <span class="mf">0.9</span><span class="p">,</span> <span class="mi">1000</span><span class="p">,</span> <span class="mf">0.3</span><span class="p">)</span>
<span class="n">video</span> <span class="o">=</span> <span class="nb">input</span><span class="o">.</span><span class="n">video</span><span class="o">.</span><span class="n">hflip</span><span class="p">()</span>
<span class="n">out</span> <span class="o">=</span> <span class="n">ffmpeg</span><span class="o">.</span><span class="n">output</span><span class="p">(</span><span class="n">audio</span><span class="p">,</span> <span class="n">video</span><span class="p">,</span> <span class="s1">&#39;out.mp4&#39;</span><span class="p">)</span>
</pre></div>
</div>
</dd></dl>
<dl class="method">
<dt id="ffmpeg.Stream.view">
<code class="sig-name descname">view</code><span class="sig-paren">(</span><em class="sig-param">detail=False</em>, <em class="sig-param">filename=None</em>, <em class="sig-param">pipe=False</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.Stream.view" title="Permalink to this definition"></a></dt>
<dd></dd></dl>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.input">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">input</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.input" title="Permalink to this definition"></a></dt>
<dd><p>Input file URL (ffmpeg <code class="docutils literal notranslate"><span class="pre">-i</span></code> option)</p>
<p>Any supplied kwargs are passed to ffmpeg verbatim (e.g. <code class="docutils literal notranslate"><span class="pre">t=20</span></code>,
<code class="docutils literal notranslate"><span class="pre">f='mp4'</span></code>, <code class="docutils literal notranslate"><span class="pre">acodec='pcm'</span></code>, etc.).</p>
<p>To tell ffmpeg to read from stdin, use <code class="docutils literal notranslate"><span class="pre">pipe:</span></code> as the filename.</p>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg.html#Main-options">Main options</a></p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.merge_outputs">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">merge_outputs</code><span class="sig-paren">(</span><em class="sig-param">*streams</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.merge_outputs" title="Permalink to this definition"></a></dt>
<dd><p>Include all given outputs in one ffmpeg command line</p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.output">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">output</code><span class="sig-paren">(</span><em class="sig-param">*streams_and_filename</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.output" title="Permalink to this definition"></a></dt>
<dd><p>Output file URL</p>
<dl class="simple">
<dt>Syntax:</dt><dd><p><cite>ffmpeg.output(stream1[, stream2, stream3…], filename, **ffmpeg_args)</cite></p>
</dd>
</dl>
<p>Any supplied keyword arguments are passed to ffmpeg verbatim (e.g.
<code class="docutils literal notranslate"><span class="pre">t=20</span></code>, <code class="docutils literal notranslate"><span class="pre">f='mp4'</span></code>, <code class="docutils literal notranslate"><span class="pre">acodec='pcm'</span></code>, <code class="docutils literal notranslate"><span class="pre">vcodec='rawvideo'</span></code>,
etc.). Some keyword-arguments are handled specially, as shown below.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>video_bitrate</strong> parameter for <code class="docutils literal notranslate"><span class="pre">-b:v</span></code>, e.g. <code class="docutils literal notranslate"><span class="pre">video_bitrate=1000</span></code>.</p></li>
<li><p><strong>audio_bitrate</strong> parameter for <code class="docutils literal notranslate"><span class="pre">-b:a</span></code>, e.g. <code class="docutils literal notranslate"><span class="pre">audio_bitrate=200</span></code>.</p></li>
<li><p><strong>format</strong> alias for <code class="docutils literal notranslate"><span class="pre">-f</span></code> parameter, e.g. <code class="docutils literal notranslate"><span class="pre">format='mp4'</span></code>
(equivalent to <code class="docutils literal notranslate"><span class="pre">f='mp4'</span></code>).</p></li>
</ul>
</dd>
</dl>
<p>If multiple streams are provided, they are mapped to the same
output.</p>
<p>To tell ffmpeg to write to stdout, use <code class="docutils literal notranslate"><span class="pre">pipe:</span></code> as the filename.</p>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg.html#Synopsis">Synopsis</a></p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.overwrite_output">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">overwrite_output</code><span class="sig-paren">(</span><em class="sig-param">stream</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.overwrite_output" title="Permalink to this definition"></a></dt>
<dd><p>Overwrite output files without asking (ffmpeg <code class="docutils literal notranslate"><span class="pre">-y</span></code> option)</p>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg.html#Main-options">Main options</a></p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.probe">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">probe</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">cmd='ffprobe'</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.probe" title="Permalink to this definition"></a></dt>
<dd><p>Run ffprobe on the specified file and return a JSON representation of the output.</p>
<dl class="field-list simple">
<dt class="field-odd">Raises</dt>
<dd class="field-odd"><p><a class="reference internal" href="#ffmpeg.Error" title="ffmpeg.Error"><strong>ffmpeg.Error</strong></a> if ffprobe returns a non-zero exit code,
an <a class="reference internal" href="#ffmpeg.Error" title="ffmpeg.Error"><code class="xref py py-class docutils literal notranslate"><span class="pre">Error</span></code></a> is raised with a generic error message.
The stderr output can be retrieved by accessing the
<code class="docutils literal notranslate"><span class="pre">stderr</span></code> property of the exception.</p>
</dd>
</dl>
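<p>A sketch of a typical helper built on <code class="docutils literal notranslate"><span class="pre">probe</span></code>. It assumes the <code class="docutils literal notranslate"><span class="pre">ffprobe</span></code> binary is on the PATH and the file has at least one video stream; the <code class="docutils literal notranslate"><span class="pre">streams</span></code>/<code class="docutils literal notranslate"><span class="pre">codec_type</span></code> keys come from ffprobe's JSON output:</p>

```python
import ffmpeg

def get_video_size(filename):
    """Return (width, height) of the first video stream.

    Requires the ffprobe binary; raises ffmpeg.Error on a non-zero exit code.
    """
    probe = ffmpeg.probe(filename)
    video = next(s for s in probe['streams'] if s['codec_type'] == 'video')
    return int(video['width']), int(video['height'])
```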
</dd></dl>
<dl class="function">
<dt id="ffmpeg.compile">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">compile</code><span class="sig-paren">(</span><em class="sig-param">stream_spec</em>, <em class="sig-param">cmd='ffmpeg'</em>, <em class="sig-param">overwrite_output=False</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.compile" title="Permalink to this definition"></a></dt>
<dd><p>Build command-line for invoking ffmpeg.</p>
<p>The <a class="reference internal" href="#ffmpeg.run" title="ffmpeg.run"><code class="xref py py-meth docutils literal notranslate"><span class="pre">run()</span></code></a> function uses this to build the command line
arguments and should work in most cases, but calling this function
directly is useful for debugging or if you need to invoke ffmpeg
manually for whatever reason.</p>
<p>This is the same as calling <a class="reference internal" href="#ffmpeg.get_args" title="ffmpeg.get_args"><code class="xref py py-meth docutils literal notranslate"><span class="pre">get_args()</span></code></a> except that it also
includes the <code class="docutils literal notranslate"><span class="pre">ffmpeg</span></code> command as the first argument.</p>
</dd></dl>
<dl class="exception">
<dt id="ffmpeg.Error">
<em class="property">exception </em><code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">Error</code><span class="sig-paren">(</span><em class="sig-param">cmd</em>, <em class="sig-param">stdout</em>, <em class="sig-param">stderr</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.Error" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">Exception</span></code></p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.get_args">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">get_args</code><span class="sig-paren">(</span><em class="sig-param">stream_spec</em>, <em class="sig-param">overwrite_output=False</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.get_args" title="Permalink to this definition"></a></dt>
<dd><p>Build command-line arguments to be passed to ffmpeg.</p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.run">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">run</code><span class="sig-paren">(</span><em class="sig-param">stream_spec</em>, <em class="sig-param">cmd='ffmpeg'</em>, <em class="sig-param">capture_stdout=False</em>, <em class="sig-param">capture_stderr=False</em>, <em class="sig-param">input=None</em>, <em class="sig-param">quiet=False</em>, <em class="sig-param">overwrite_output=False</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.run" title="Permalink to this definition"></a></dt>
<dd><p>Invoke ffmpeg for the supplied node graph.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>capture_stdout</strong> if True, capture stdout (to be used with
<code class="docutils literal notranslate"><span class="pre">pipe:</span></code> ffmpeg outputs).</p></li>
<li><p><strong>capture_stderr</strong> if True, capture stderr.</p></li>
<li><p><strong>quiet</strong> shorthand for setting <code class="docutils literal notranslate"><span class="pre">capture_stdout</span></code> and <code class="docutils literal notranslate"><span class="pre">capture_stderr</span></code>.</p></li>
<li><p><strong>input</strong> text to be sent to stdin (to be used with <code class="docutils literal notranslate"><span class="pre">pipe:</span></code>
ffmpeg inputs)</p></li>
<li><p><strong>**kwargs</strong> keyword-arguments passed to <code class="docutils literal notranslate"><span class="pre">get_args()</span></code> (e.g.
<code class="docutils literal notranslate"><span class="pre">overwrite_output=True</span></code>).</p></li>
</ul>
</dd>
</dl>
<p>Returns: (out, err) tuple containing captured stdout and stderr data.</p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.run_async">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">run_async</code><span class="sig-paren">(</span><em class="sig-param">stream_spec</em>, <em class="sig-param">cmd='ffmpeg'</em>, <em class="sig-param">pipe_stdin=False</em>, <em class="sig-param">pipe_stdout=False</em>, <em class="sig-param">pipe_stderr=False</em>, <em class="sig-param">quiet=False</em>, <em class="sig-param">overwrite_output=False</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.run_async" title="Permalink to this definition"></a></dt>
<dd><p>Asynchronously invoke ffmpeg for the supplied node graph.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>pipe_stdin</strong> if True, connect pipe to subprocess stdin (to be
used with <code class="docutils literal notranslate"><span class="pre">pipe:</span></code> ffmpeg inputs).</p></li>
<li><p><strong>pipe_stdout</strong> if True, connect pipe to subprocess stdout (to be
used with <code class="docutils literal notranslate"><span class="pre">pipe:</span></code> ffmpeg outputs).</p></li>
<li><p><strong>pipe_stderr</strong> if True, connect pipe to subprocess stderr.</p></li>
<li><p><strong>quiet</strong> shorthand for setting <code class="docutils literal notranslate"><span class="pre">capture_stdout</span></code> and
<code class="docutils literal notranslate"><span class="pre">capture_stderr</span></code>.</p></li>
<li><p><strong>**kwargs</strong> keyword-arguments passed to <code class="docutils literal notranslate"><span class="pre">get_args()</span></code> (e.g.
<code class="docutils literal notranslate"><span class="pre">overwrite_output=True</span></code>).</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p>A <a class="reference external" href="https://docs.python.org/3/library/subprocess.html#popen-objects">subprocess Popen</a> object representing the child process.</p>
</dd>
</dl>
<p class="rubric">Examples</p>
<p>Run and stream input:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">process</span> <span class="o">=</span> <span class="p">(</span>
<span class="n">ffmpeg</span>
<span class="o">.</span><span class="n">input</span><span class="p">(</span><span class="s1">&#39;pipe:&#39;</span><span class="p">,</span> <span class="nb">format</span><span class="o">=</span><span class="s1">&#39;rawvideo&#39;</span><span class="p">,</span> <span class="n">pix_fmt</span><span class="o">=</span><span class="s1">&#39;rgb24&#39;</span><span class="p">,</span> <span class="n">s</span><span class="o">=</span><span class="s1">&#39;</span><span class="si">{}</span><span class="s1">x</span><span class="si">{}</span><span class="s1">&#39;</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">width</span><span class="p">,</span> <span class="n">height</span><span class="p">))</span>
<span class="o">.</span><span class="n">output</span><span class="p">(</span><span class="n">out_filename</span><span class="p">,</span> <span class="n">pix_fmt</span><span class="o">=</span><span class="s1">&#39;yuv420p&#39;</span><span class="p">)</span>
<span class="o">.</span><span class="n">overwrite_output</span><span class="p">()</span>
<span class="o">.</span><span class="n">run_async</span><span class="p">(</span><span class="n">pipe_stdin</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="p">)</span>
<span class="n">process</span><span class="o">.</span><span class="n">communicate</span><span class="p">(</span><span class="nb">input</span><span class="o">=</span><span class="n">input_data</span><span class="p">)</span>
</pre></div>
</div>
<p>Run and capture output:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">process</span> <span class="o">=</span> <span class="p">(</span>
<span class="n">ffmpeg</span>
<span class="o">.</span><span class="n">input</span><span class="p">(</span><span class="n">in_filename</span><span class="p">)</span>
<span class="o">.</span><span class="n">output</span><span class="p">(</span><span class="s1">&#39;pipe:&#39;</span><span class="p">,</span> <span class="nb">format</span><span class="o">=</span><span class="s1">&#39;rawvideo&#39;</span><span class="p">,</span> <span class="n">pix_fmt</span><span class="o">=</span><span class="s1">&#39;rgb24&#39;</span><span class="p">)</span>
<span class="o">.</span><span class="n">run_async</span><span class="p">(</span><span class="n">pipe_stdout</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="n">pipe_stderr</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="p">)</span>
<span class="n">out</span><span class="p">,</span> <span class="n">err</span> <span class="o">=</span> <span class="n">process</span><span class="o">.</span><span class="n">communicate</span><span class="p">()</span>
</pre></div>
</div>
<p>Process video frame-by-frame using numpy:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">process1</span> <span class="o">=</span> <span class="p">(</span>
<span class="n">ffmpeg</span>
<span class="o">.</span><span class="n">input</span><span class="p">(</span><span class="n">in_filename</span><span class="p">)</span>
<span class="o">.</span><span class="n">output</span><span class="p">(</span><span class="s1">&#39;pipe:&#39;</span><span class="p">,</span> <span class="nb">format</span><span class="o">=</span><span class="s1">&#39;rawvideo&#39;</span><span class="p">,</span> <span class="n">pix_fmt</span><span class="o">=</span><span class="s1">&#39;rgb24&#39;</span><span class="p">)</span>
<span class="o">.</span><span class="n">run_async</span><span class="p">(</span><span class="n">pipe_stdout</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="p">)</span>
<span class="n">process2</span> <span class="o">=</span> <span class="p">(</span>
<span class="n">ffmpeg</span>
<span class="o">.</span><span class="n">input</span><span class="p">(</span><span class="s1">&#39;pipe:&#39;</span><span class="p">,</span> <span class="nb">format</span><span class="o">=</span><span class="s1">&#39;rawvideo&#39;</span><span class="p">,</span> <span class="n">pix_fmt</span><span class="o">=</span><span class="s1">&#39;rgb24&#39;</span><span class="p">,</span> <span class="n">s</span><span class="o">=</span><span class="s1">&#39;</span><span class="si">{}</span><span class="s1">x</span><span class="si">{}</span><span class="s1">&#39;</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">width</span><span class="p">,</span> <span class="n">height</span><span class="p">))</span>
<span class="o">.</span><span class="n">output</span><span class="p">(</span><span class="n">out_filename</span><span class="p">,</span> <span class="n">pix_fmt</span><span class="o">=</span><span class="s1">&#39;yuv420p&#39;</span><span class="p">)</span>
<span class="o">.</span><span class="n">overwrite_output</span><span class="p">()</span>
<span class="o">.</span><span class="n">run_async</span><span class="p">(</span><span class="n">pipe_stdin</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="p">)</span>
<span class="k">while</span> <span class="kc">True</span><span class="p">:</span>
<span class="n">in_bytes</span> <span class="o">=</span> <span class="n">process1</span><span class="o">.</span><span class="n">stdout</span><span class="o">.</span><span class="n">read</span><span class="p">(</span><span class="n">width</span> <span class="o">*</span> <span class="n">height</span> <span class="o">*</span> <span class="mi">3</span><span class="p">)</span>
<span class="k">if</span> <span class="ow">not</span> <span class="n">in_bytes</span><span class="p">:</span>
<span class="k">break</span>
<span class="n">in_frame</span> <span class="o">=</span> <span class="p">(</span>
<span class="n">np</span>
<span class="o">.</span><span class="n">frombuffer</span><span class="p">(</span><span class="n">in_bytes</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">uint8</span><span class="p">)</span>
<span class="o">.</span><span class="n">reshape</span><span class="p">([</span><span class="n">height</span><span class="p">,</span> <span class="n">width</span><span class="p">,</span> <span class="mi">3</span><span class="p">])</span>
<span class="p">)</span>
<span class="n">out_frame</span> <span class="o">=</span> <span class="n">in_frame</span> <span class="o">*</span> <span class="mf">0.3</span>
<span class="n">process2</span><span class="o">.</span><span class="n">stdin</span><span class="o">.</span><span class="n">write</span><span class="p">(</span>
<span class="n">out_frame</span>
<span class="o">.</span><span class="n">astype</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">uint8</span><span class="p">)</span>
<span class="o">.</span><span class="n">tobytes</span><span class="p">()</span>
<span class="p">)</span>
<span class="n">process2</span><span class="o">.</span><span class="n">stdin</span><span class="o">.</span><span class="n">close</span><span class="p">()</span>
<span class="n">process1</span><span class="o">.</span><span class="n">wait</span><span class="p">()</span>
<span class="n">process2</span><span class="o">.</span><span class="n">wait</span><span class="p">()</span>
</pre></div>
</div>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.view">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">view</code><span class="sig-paren">(</span><em class="sig-param">stream_spec</em>, <em class="sig-param">detail=False</em>, <em class="sig-param">filename=None</em>, <em class="sig-param">pipe=False</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.view" title="Permalink to this definition"></a></dt>
<dd></dd></dl>
<dl class="function">
<dt id="ffmpeg.colorchannelmixer">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">colorchannelmixer</code><span class="sig-paren">(</span><em class="sig-param">stream</em>, <em class="sig-param">*args</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.colorchannelmixer" title="Permalink to this definition"></a></dt>
<dd><p>Adjust video input frames by re-mixing color channels.</p>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg-filters.html#colorchannelmixer">colorchannelmixer</a></p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.concat">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">concat</code><span class="sig-paren">(</span><em class="sig-param">*streams</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.concat" title="Permalink to this definition"></a></dt>
<dd><p>Concatenate audio and video streams, joining them together one after the other.</p>
<p>The filter works on segments of synchronized video and audio streams. All segments must have the same number of
streams of each type, and that will also be the number of streams at output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>unsafe</strong> Activate unsafe mode: do not fail if segments have a different format.</p>
</dd>
</dl>
<p>Related streams do not always have exactly the same duration, for various reasons including codec frame size or
sloppy authoring. For that reason, related synchronized streams (e.g. a video and its audio track) should be
concatenated at once. The concat filter will use the duration of the longest stream in each segment (except the
last one), and if necessary pad shorter audio streams with silence.</p>
<p>For this filter to work correctly, all segments must start at timestamp 0.</p>
<p>All corresponding streams must have the same parameters in all segments; the filtering system will automatically
select a common pixel format for video streams, and a common sample format, sample rate and channel layout for
audio streams, but other settings, such as resolution, must be converted explicitly by the user.</p>
<p>Different frame rates are acceptable but will result in variable frame rate at output; be sure to configure the
output file to handle it.</p>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg-filters.html#concat">concat</a></p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.crop">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">crop</code><span class="sig-paren">(</span><em class="sig-param">stream</em>, <em class="sig-param">x</em>, <em class="sig-param">y</em>, <em class="sig-param">width</em>, <em class="sig-param">height</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.crop" title="Permalink to this definition"></a></dt>
<dd><p>Crop the input video.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> The horizontal position, in the input video, of the left edge of
the output video.</p></li>
<li><p><strong>y</strong> The vertical position, in the input video, of the top edge of the
output video.</p></li>
<li><p><strong>width</strong> The width of the output video. Must be greater than 0.</p></li>
<li><p><strong>height</strong> The height of the output video. Must be greater than 0.</p></li>
</ul>
</dd>
</dl>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg-filters.html#crop">crop</a></p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.drawbox">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">drawbox</code><span class="sig-paren">(</span><em class="sig-param">stream</em>, <em class="sig-param">x</em>, <em class="sig-param">y</em>, <em class="sig-param">width</em>, <em class="sig-param">height</em>, <em class="sig-param">color</em>, <em class="sig-param">thickness=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.drawbox" title="Permalink to this definition"></a></dt>
<dd><p>Draw a colored box on the input image.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> The expression which specifies the top left corner x coordinate of the box. It defaults to 0.</p></li>
<li><p><strong>y</strong> The expression which specifies the top left corner y coordinate of the box. It defaults to 0.</p></li>
<li><p><strong>width</strong> Specify the width of the box; if 0 interpreted as the input width. It defaults to 0.</p></li>
<li><p><strong>height</strong> Specify the height of the box; if 0 interpreted as the input height. It defaults to 0.</p></li>
<li><p><strong>color</strong> Specify the color of the box to write. For the general syntax of this option, check the “Color” section
in the ffmpeg-utils manual. If the special value invert is used, the box edge color is the same as the
video with inverted luma.</p></li>
<li><p><strong>thickness</strong> The expression which sets the thickness of the box edge. Default value is 3.</p></li>
<li><p><strong>w</strong> Alias for <code class="docutils literal notranslate"><span class="pre">width</span></code>.</p></li>
<li><p><strong>h</strong> Alias for <code class="docutils literal notranslate"><span class="pre">height</span></code>.</p></li>
<li><p><strong>c</strong> Alias for <code class="docutils literal notranslate"><span class="pre">color</span></code>.</p></li>
<li><p><strong>t</strong> Alias for <code class="docutils literal notranslate"><span class="pre">thickness</span></code>.</p></li>
</ul>
</dd>
</dl>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg-filters.html#drawbox">drawbox</a></p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.filter_">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">filter_</code><span class="sig-paren">(</span><em class="sig-param">parent_node</em>, <em class="sig-param">filter_name</em>, <em class="sig-param">*args</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.filter_" title="Permalink to this definition"></a></dt>
<dd><p>Apply a custom single-source filter.</p>
<p><code class="docutils literal notranslate"><span class="pre">filter_</span></code> is normally used by higher-level filter functions such as <code class="docutils literal notranslate"><span class="pre">hflip</span></code>, but if a filter implementation
is missing from <code class="docutils literal notranslate"><span class="pre">ffmpeg-python</span></code>, you can call <code class="docutils literal notranslate"><span class="pre">filter_</span></code> directly to have <code class="docutils literal notranslate"><span class="pre">ffmpeg-python</span></code> pass the filter name
and arguments to ffmpeg verbatim.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>parent_node</strong> Source stream to apply the filter to.</p></li>
<li><p><strong>filter_name</strong> ffmpeg filter name, e.g. <cite>colorchannelmixer</cite></p></li>
<li><p><strong>*args</strong> list of args to pass to ffmpeg verbatim</p></li>
<li><p><strong>**kwargs</strong> list of keyword-args to pass to ffmpeg verbatim</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.drawtext">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">drawtext</code><span class="sig-paren">(</span><em class="sig-param">stream</em>, <em class="sig-param">text=None</em>, <em class="sig-param">x=0</em>, <em class="sig-param">y=0</em>, <em class="sig-param">escape_text=True</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.drawtext" title="Permalink to this definition"></a></dt>
<dd><p>Draw a text string or text from a specified file on top of a video, using the libfreetype library.</p>
<p>To enable compilation of this filter, you need to configure FFmpeg with <code class="docutils literal notranslate"><span class="pre">--enable-libfreetype</span></code>. To enable default
font fallback and the font option you need to configure FFmpeg with <code class="docutils literal notranslate"><span class="pre">--enable-libfontconfig</span></code>. To enable the
text_shaping option, you need to configure FFmpeg with <code class="docutils literal notranslate"><span class="pre">--enable-libfribidi</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>box</strong> Used to draw a box around text using the background color. The value must be either 1 (enable) or 0
(disable). The default value of box is 0.</p></li>
<li><p><strong>boxborderw</strong> Set the width of the border to be drawn around the box using boxcolor. The default value of
boxborderw is 0.</p></li>
<li><p><strong>boxcolor</strong> The color to be used for drawing box around text. For the syntax of this option, check the “Color”
section in the ffmpeg-utils manual. The default value of boxcolor is “white”.</p></li>
<li><p><strong>line_spacing</strong> Set the line spacing in pixels of the border to be drawn around the box using box. The default
value of line_spacing is 0.</p></li>
<li><p><strong>borderw</strong> Set the width of the border to be drawn around the text using bordercolor. The default value of
borderw is 0.</p></li>
<li><p><strong>bordercolor</strong> Set the color to be used for drawing border around text. For the syntax of this option, check the
“Color” section in the ffmpeg-utils manual. The default value of bordercolor is “black”.</p></li>
<li><p><strong>expansion</strong> Select how the text is expanded. Can be either none, strftime (deprecated) or normal (default). See
the Text expansion section below for details.</p></li>
<li><p><strong>basetime</strong> Set a start time for the count. Value is in microseconds. Only applied in the deprecated strftime
expansion mode. To emulate in normal expansion mode use the pts function, supplying the start time (in
seconds) as the second argument.</p></li>
<li><p><strong>fix_bounds</strong> If true, check and fix text coords to avoid clipping.</p></li>
<li><p><strong>fontcolor</strong> The color to be used for drawing fonts. For the syntax of this option, check the “Color” section in
the ffmpeg-utils manual. The default value of fontcolor is “black”.</p></li>
<li><p><strong>fontcolor_expr</strong> String which is expanded the same way as text to obtain dynamic fontcolor value. By default
this option has empty value and is not processed. When this option is set, it overrides fontcolor option.</p></li>
<li><p><strong>font</strong> The font family to be used for drawing text. By default Sans.</p></li>
<li><p><strong>fontfile</strong> The font file to be used for drawing text. The path must be included. This parameter is mandatory if
the fontconfig support is disabled.</p></li>
<li><p><strong>alpha</strong> Draw the text applying alpha blending. The value can be a number between 0.0 and 1.0. The expression
accepts the same variables x, y as well. The default value is 1. Please see fontcolor_expr.</p></li>
<li><p><strong>fontsize</strong> The font size to be used for drawing text. The default value of fontsize is 16.</p></li>
<li><p><strong>text_shaping</strong> If set to 1, attempt to shape the text (for example, reverse the order of right-to-left text and
join Arabic characters) before drawing it. Otherwise, just draw the text exactly as given. By default 1 (if
supported).</p></li>
<li><p><strong>ft_load_flags</strong> <p>The flags to be used for loading the fonts. The flags map the corresponding flags supported by
libfreetype, and are a combination of the following values:</p>
<ul>
<li><p><code class="docutils literal notranslate"><span class="pre">default</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">no_scale</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">no_hinting</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">render</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">no_bitmap</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">vertical_layout</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">force_autohint</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">crop_bitmap</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">pedantic</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">ignore_global_advance_width</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">no_recurse</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">ignore_transform</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">monochrome</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">linear_design</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">no_autohint</span></code></p></li>
</ul>
</td>
</tr>
</tbody>
</table>
<p>This function is used internally by all of the other single-source filters (e.g. <code class="docutils literal"><span class="pre">hflip</span></code>, <code class="docutils literal"><span class="pre">crop</span></code>, etc.).
For custom multi-source filters, see <code class="docutils literal"><span class="pre">filter_multi</span></code> instead.</p>
<p>The function name is suffixed with <code class="docutils literal"><span class="pre">_</span></code> in order avoid confusion with the standard python <code class="docutils literal"><span class="pre">filter</span></code> function.</p>
<p class="rubric">Example</p>
<p><code class="docutils literal"><span class="pre">ffmpeg.input('in.mp4').filter_('hflip').output('out.mp4').run()</span></code></p>
<p>Default value is “default”. For more information consult the documentation for the FT_LOAD_* libfreetype
flags.</p>
</p></li>
<li><p><strong>shadowcolor</strong> The color to be used for drawing a shadow behind the drawn text. For the syntax of this option,
check the “Color” section in the ffmpeg-utils manual. The default value of shadowcolor is “black”.</p></li>
<li><p><strong>shadowx</strong> The x offset for the text shadow position with respect to the position of the text. It can be a
positive or negative value. The default value is “0”.</p></li>
<li><p><strong>shadowy</strong> The y offset for the text shadow position with respect to the position of the text. It can be a
positive or negative value. The default value is “0”.</p></li>
<li><p><strong>start_number</strong> The starting frame number for the n/frame_num variable. The default value is “0”.</p></li>
<li><p><strong>tabsize</strong> The size in number of spaces to use for rendering the tab. Default value is 4.</p></li>
<li><p><strong>timecode</strong> Set the initial timecode representation in “hh:mm:ss[:;.]ff” format. It can be used with or without
text parameter. timecode_rate option must be specified.</p></li>
<li><p><strong>rate</strong> Set the timecode frame rate (timecode only).</p></li>
<li><p><strong>timecode_rate</strong> Alias for <code class="docutils literal notranslate"><span class="pre">rate</span></code>.</p></li>
<li><p><strong>r</strong> Alias for <code class="docutils literal notranslate"><span class="pre">rate</span></code>.</p></li>
<li><p><strong>tc24hmax</strong> If set to 1, the output of the timecode option will wrap around at 24 hours. Default is 0 (disabled).</p></li>
<li><p><strong>text</strong> The text string to be drawn. The text must be a sequence of UTF-8 encoded characters. This parameter is
mandatory if no file is specified with the parameter textfile.</p></li>
<li><p><strong>textfile</strong> A text file containing text to be drawn. The text must be a sequence of UTF-8 encoded characters.
This parameter is mandatory if no text string is specified with the parameter text. If both text and
textfile are specified, an error is thrown.</p></li>
<li><p><strong>reload</strong> If set to 1, the textfile will be reloaded before each frame. Be sure to update it atomically, or it
may be read partially, or even fail.</p></li>
<li><p><strong>x</strong> The expression which specifies the offset where text will be drawn within the video frame. It is relative to
the left border of the output image. The default value is “0”.</p></li>
<li><p><strong>y</strong> The expression which specifies the offset where text will be drawn within the video frame. It is relative to
the top border of the output image. The default value is “0”. See below for the list of accepted constants
and functions.</p></li>
</ul>
</dd>
</dl>
<dl>
<dt>Expression constants:</dt><dd><dl class="simple">
<dt>The parameters for x and y are expressions containing the following constants and functions:</dt><dd><ul class="simple">
<li><p>dar: input display aspect ratio, it is the same as <code class="docutils literal notranslate"><span class="pre">(w</span> <span class="pre">/</span> <span class="pre">h)</span> <span class="pre">*</span> <span class="pre">sar</span></code></p></li>
<li><p>hsub: horizontal chroma subsample values. For example for the pixel format “yuv422p” hsub is 2 and vsub
is 1.</p></li>
<li><p>vsub: vertical chroma subsample values. For example for the pixel format “yuv422p” hsub is 2 and vsub
is 1.</p></li>
<li><p>line_h: the height of each text line</p></li>
<li><p>lh: Alias for <code class="docutils literal notranslate"><span class="pre">line_h</span></code>.</p></li>
<li><p>main_h: the input height</p></li>
<li><p>h: Alias for <code class="docutils literal notranslate"><span class="pre">main_h</span></code>.</p></li>
<li><p>H: Alias for <code class="docutils literal notranslate"><span class="pre">main_h</span></code>.</p></li>
<li><p>main_w: the input width</p></li>
<li><p>w: Alias for <code class="docutils literal notranslate"><span class="pre">main_w</span></code>.</p></li>
<li><p>W: Alias for <code class="docutils literal notranslate"><span class="pre">main_w</span></code>.</p></li>
<li><p>ascent: the maximum distance from the baseline to the highest/upper grid coordinate used to place a glyph
outline point, for all the rendered glyphs. It is a positive value, due to the grid’s orientation with the Y
axis upwards.</p></li>
<li><p>max_glyph_a: Alias for <code class="docutils literal notranslate"><span class="pre">ascent</span></code>.</p></li>
<li><p>descent: the maximum distance from the baseline to the lowest grid coordinate used to place a glyph outline
point, for all the rendered glyphs. This is a negative value, due to the grid’s orientation, with the Y axis
upwards.</p></li>
<li><p>max_glyph_d: Alias for <code class="docutils literal notranslate"><span class="pre">descent</span></code>.</p></li>
<li><p>max_glyph_h: maximum glyph height, that is the maximum height for all the glyphs contained in the rendered
text; it is equivalent to ascent - descent.</p></li>
<li><p>max_glyph_w: maximum glyph width, that is the maximum width for all the glyphs contained in the rendered
text.</p></li>
<li><p>n: the number of the input frame, starting from 0</p></li>
<li><p>rand(min, max): return a random number included between min and max</p></li>
<li><p>sar: The input sample aspect ratio.</p></li>
<li><p>t: timestamp expressed in seconds, NAN if the input timestamp is unknown</p></li>
<li><p>text_h: the height of the rendered text</p></li>
<li><p>th: Alias for <code class="docutils literal notranslate"><span class="pre">text_h</span></code>.</p></li>
<li><p>text_w: the width of the rendered text</p></li>
<li><p>tw: Alias for <code class="docutils literal notranslate"><span class="pre">text_w</span></code>.</p></li>
<li><p>x: the x offset coordinates where the text is drawn.</p></li>
<li><p>y: the y offset coordinates where the text is drawn.</p></li>
</ul>
</dd>
</dl>
<p>These parameters allow the x and y expressions to refer to each other, so you can for example specify
<code class="docutils literal notranslate"><span class="pre">y=x/dar</span></code>.</p>
</dd>
</dl>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg-filters.html#drawtext">drawtext</a></p>
</dd></dl>
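<p>The expression constants above make relative positioning straightforward. The sketch below assembles options for horizontally centered text near the bottom of the frame; the option names and expression constants are drawtext’s, while the string-building is only illustrative of what ffmpeg receives:</p>

```python
# Illustrative only: assemble a drawtext option set using the expression
# constants listed above (w, h, text_w, text_h).
opts = {
    "text": "Hello",
    "x": "(w-text_w)/2",    # center horizontally
    "y": "h-text_h-10",     # 10 px above the bottom edge
    "fontsize": 24,
}
# Roughly equivalent ffmpeg-python call (assuming ffmpeg-python is installed):
#   ffmpeg.input('in.mp4').drawtext(**opts).output('out.mp4').run()
filter_spec = "drawtext=" + ":".join(f"{k}={v}" for k, v in opts.items())
print(filter_spec)
# → drawtext=text=Hello:x=(w-text_w)/2:y=h-text_h-10:fontsize=24
```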
<dl class="function">
<dt id="ffmpeg.filter_multi">
<code class="descclassname">ffmpeg.</code><code class="descname">filter_multi</code><span class="sig-paren">(</span><em>parent_nodes</em>, <em>filter_name</em>, <em>*args</em>, <em>**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.filter_multi" title="Permalink to this definition"></a></dt>
<dd><p>Apply custom multi-source filter.</p>
<p>This is nearly identical to the <code class="docutils literal"><span class="pre">filter</span></code> function except that it allows filters to be applied to multiple
streams. Its normally used by higher-level filter functions such as <code class="docutils literal"><span class="pre">concat</span></code>, but if a filter implementation
is missing from <code class="docutils literal"><span class="pre">fmpeg-python</span></code>, you can call <code class="docutils literal"><span class="pre">filter_multi</span></code> directly.</p>
<p>Note that because it applies to multiple streams, it cant be used as an operator, unlike the <code class="docutils literal"><span class="pre">filter</span></code> function
(e.g. <code class="docutils literal"><span class="pre">ffmpeg.input('in.mp4').filter_('hflip')</span></code>)</p>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first last simple">
<li><strong>parent_nodes</strong> List of source streams to apply filter to.</li>
<li><strong>filter_name</strong> ffmpeg filter name, e.g. <cite>concat</cite></li>
<li><strong>*args</strong> list of args to pass to ffmpeg verbatim</li>
<li><strong>**kwargs</strong> list of keyword-args to pass to ffmpeg verbatim</li>
<dt id="ffmpeg.filter">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">filter</code><span class="sig-paren">(</span><em class="sig-param">stream_spec</em>, <em class="sig-param">filter_name</em>, <em class="sig-param">*args</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.filter" title="Permalink to this definition"></a></dt>
<dd><p>Apply custom filter.</p>
<p><code class="docutils literal notranslate"><span class="pre">filter_</span></code> is normally used by higher-level filter functions such as <code class="docutils literal notranslate"><span class="pre">hflip</span></code>, but if a filter implementation
is missing from <code class="docutils literal notranslate"><span class="pre">ffmpeg-python</span></code>, you can call <code class="docutils literal notranslate"><span class="pre">filter_</span></code> directly to have <code class="docutils literal notranslate"><span class="pre">ffmpeg-python</span></code> pass the filter name
and arguments to ffmpeg verbatim.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>stream_spec</strong> a Stream, list of Streams, or label-to-Stream dictionary mapping</p></li>
<li><p><strong>filter_name</strong> ffmpeg filter name, e.g. <cite>colorchannelmixer</cite></p></li>
<li><p><strong>*args</strong> list of args to pass to ffmpeg verbatim</p></li>
<li><p><strong>**kwargs</strong> list of keyword-args to pass to ffmpeg verbatim</p></li>
</ul>
</td>
</tr>
</tbody>
</table>
<p>For custom single-source filters, see <code class="docutils literal"><span class="pre">filter_multi</span></code> instead.</p>
</dd>
</dl>
<p>The function name is suffixed with <code class="docutils literal notranslate"><span class="pre">_</span></code> in order avoid confusion with the standard python <code class="docutils literal notranslate"><span class="pre">filter</span></code> function.</p>
<p class="rubric">Example</p>
<p><code class="docutils literal"><span class="pre">ffmpeg.filter_multi(ffmpeg.input('in1.mp4'),</span> <span class="pre">ffmpeg.input('in2.mp4'),</span> <span class="pre">'concat',</span> <span class="pre">n=2).output('out.mp4').run()</span></code></p>
<p><code class="docutils literal notranslate"><span class="pre">ffmpeg.input('in.mp4').filter('hflip').output('out.mp4').run()</span></code></p>
</dd></dl>
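<p>When a filter is missing from ffmpeg-python, the name and arguments are passed to ffmpeg verbatim. The hypothetical helper below only sketches how positional and keyword arguments combine into ffmpeg’s filter syntax:</p>

```python
# Hypothetical sketch of verbatim argument passing: positional args and
# keyword args are joined into ffmpeg's name=a:b:k=v filter syntax.
def verbatim_filter(filter_name, *args, **kwargs):
    parts = [str(a) for a in args] + [f"{k}={v}" for k, v in kwargs.items()]
    return f"{filter_name}={':'.join(parts)}" if parts else filter_name

# Roughly what ffmpeg.input('in.mp4').filter('crop', 640, 360) passes along:
print(verbatim_filter("crop", 640, 360))  # → crop=640:360
```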
<dl class="function">
<dt id="ffmpeg.filter_">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">filter_</code><span class="sig-paren">(</span><em class="sig-param">stream_spec</em>, <em class="sig-param">filter_name</em>, <em class="sig-param">*args</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.filter_" title="Permalink to this definition"></a></dt>
<dd><p>Alternate name for <code class="docutils literal notranslate"><span class="pre">filter</span></code>, so as to not collide with the
built-in python <code class="docutils literal notranslate"><span class="pre">filter</span></code> operator.</p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.filter_multi_output">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">filter_multi_output</code><span class="sig-paren">(</span><em class="sig-param">stream_spec</em>, <em class="sig-param">filter_name</em>, <em class="sig-param">*args</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.filter_multi_output" title="Permalink to this definition"></a></dt>
<dd><p>Apply custom filter with one or more outputs.</p>
<p>This is the same as <code class="docutils literal notranslate"><span class="pre">filter</span></code> except that the filter can produce more than one output.</p>
<p>To reference an output stream, use either the <code class="docutils literal notranslate"><span class="pre">.stream</span></code> operator or bracket shorthand:</p>
<p class="rubric">Example</p>
<p><code class="docutils literal notranslate"><span class="pre">`</span>
<span class="pre">split</span> <span class="pre">=</span> <span class="pre">ffmpeg.input('in.mp4').filter_multi_output('split')</span>
<span class="pre">split0</span> <span class="pre">=</span> <span class="pre">split.stream(0)</span>
<span class="pre">split1</span> <span class="pre">=</span> <span class="pre">split[1]</span>
<span class="pre">ffmpeg.concat(split0,</span> <span class="pre">split1).output('out.mp4').run()</span>
<span class="pre">`</span></code></p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.hflip">
<code class="descclassname">ffmpeg.</code><code class="descname">hflip</code><span class="sig-paren">(</span><em>parent_node</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.hflip" title="Permalink to this definition"></a></dt>
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">hflip</code><span class="sig-paren">(</span><em class="sig-param">stream</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.hflip" title="Permalink to this definition"></a></dt>
<dd><p>Flip the input video horizontally.</p>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg-filters.html#hflip">hflip</a></p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.hue">
<code class="descclassname">ffmpeg.</code><code class="descname">hue</code><span class="sig-paren">(</span><em>parent_node</em>, <em>**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.hue" title="Permalink to this definition"></a></dt>
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">hue</code><span class="sig-paren">(</span><em class="sig-param">stream</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.hue" title="Permalink to this definition"></a></dt>
<dd><p>Modify the hue and/or the saturation of the input.</p>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first last simple">
<li><strong>h</strong> Specify the hue angle as a number of degrees. It accepts an expression, and defaults to “0”.</li>
<li><strong>s</strong> Specify the saturation in the [-10,10] range. It accepts an expression and defaults to “1”.</li>
<li><strong>H</strong> Specify the hue angle as a number of radians. It accepts an expression, and defaults to “0”.</li>
<li><strong>b</strong> Specify the brightness in the [-10,10] range. It accepts an expression and defaults to “0”.</li>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>h</strong> Specify the hue angle as a number of degrees. It accepts an expression, and defaults to “0”.</p></li>
<li><p><strong>s</strong> Specify the saturation in the [-10,10] range. It accepts an expression and defaults to “1”.</p></li>
<li><p><strong>H</strong> Specify the hue angle as a number of radians. It accepts an expression, and defaults to “0”.</p></li>
<li><p><strong>b</strong> Specify the brightness in the [-10,10] range. It accepts an expression and defaults to “0”.</p></li>
</ul>
</td>
</tr>
</tbody>
</table>
</dd>
</dl>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg-filters.html#hue">hue</a></p>
</dd></dl>
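<p>A usage sketch (assuming ffmpeg-python is installed); since <code class="docutils literal notranslate"><span class="pre">h</span></code> accepts an expression, the hue can also vary over time. The string-building below is only illustrative of the filter argument ffmpeg receives:</p>

```python
# Static adjustment: rotate hue 90 degrees and boost saturation.
#   ffmpeg.input('in.mp4').hue(h=90, s=1.5).output('out.mp4').run()
# h accepts an expression, e.g. a full hue rotation every 10 seconds:
hue_opts = {"h": "360*t/10", "s": 1}
hue_spec = "hue=" + ":".join(f"{k}={v}" for k, v in hue_opts.items())
print(hue_spec)  # → hue=h=360*t/10:s=1
```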
<dl class="function">
<dt id="ffmpeg.overlay">
<code class="descclassname">ffmpeg.</code><code class="descname">overlay</code><span class="sig-paren">(</span><em>main_parent_node</em>, <em>overlay_parent_node</em>, <em>eof_action=repeat</em>, <em>**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.overlay" title="Permalink to this definition"></a></dt>
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">overlay</code><span class="sig-paren">(</span><em class="sig-param">main_parent_node</em>, <em class="sig-param">overlay_parent_node</em>, <em class="sig-param">eof_action='repeat'</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.overlay" title="Permalink to this definition"></a></dt>
<dd><p>Overlay one video on top of another.</p>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first last simple">
<li><strong>x</strong> Set the expression for the x coordinates of the overlaid video on the main video. Default value is 0. In
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> Set the expression for the x coordinates of the overlaid video on the main video. Default value is 0. In
case the expression is invalid, it is set to a huge value (meaning that the overlay will not be displayed
within the output visible area).</li>
<li><strong>y</strong> Set the expression for the y coordinates of the overlaid video on the main video. Default value is 0. In
within the output visible area).</p></li>
<li><p><strong>y</strong> Set the expression for the y coordinates of the overlaid video on the main video. Default value is 0. In
case the expression is invalid, it is set to a huge value (meaning that the overlay will not be displayed
within the output visible area).</li>
<li><strong>eof_action</strong> <p>The action to take when EOF is encountered on the secondary input; it accepts one of the following
within the output visible area).</p></li>
<li><p><strong>eof_action</strong> <p>The action to take when EOF is encountered on the secondary input; it accepts one of the following
values:</p>
<ul>
<li><code class="docutils literal"><span class="pre">repeat</span></code>: Repeat the last frame (the default).</li>
<li><code class="docutils literal"><span class="pre">endall</span></code>: End both streams.</li>
<li><code class="docutils literal"><span class="pre">pass</span></code>: Pass the main input through.</li>
<li><p><code class="docutils literal notranslate"><span class="pre">repeat</span></code>: Repeat the last frame (the default).</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">endall</span></code>: End both streams.</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">pass</span></code>: Pass the main input through.</p></li>
</ul>
</li>
<li><strong>eval</strong> <p>Set when the expressions for x, and y are evaluated.
</p></li>
<li><p><strong>eval</strong> <p>Set when the expressions for x, and y are evaluated.
It accepts the following values:</p>
<ul>
<li><dl class="first docutils">
<dt><code class="docutils literal"><span class="pre">init</span></code>: only evaluate expressions once during the filter initialization or when a command is</dt>
<dd>processed</dd>
<li><dl class="simple">
<dt><code class="docutils literal notranslate"><span class="pre">init</span></code>: only evaluate expressions once during the filter initialization or when a command is</dt><dd><p>processed</p>
</dd>
</dl>
</li>
<li><code class="docutils literal"><span class="pre">frame</span></code>: evaluate expressions for each incoming frame</li>
<li><p><code class="docutils literal notranslate"><span class="pre">frame</span></code>: evaluate expressions for each incoming frame</p></li>
</ul>
<p>Default value is <code class="docutils literal"><span class="pre">frame</span></code>.</p>
</li>
<li><strong>shortest</strong> If set to 1, force the output to terminate when the shortest input terminates. Default value is 0.</li>
<li><strong>format</strong> <p>Set the format for the output video.
<p>Default value is <code class="docutils literal notranslate"><span class="pre">frame</span></code>.</p>
</p></li>
<li><p><strong>shortest</strong> If set to 1, force the output to terminate when the shortest input terminates. Default value is 0.</p></li>
<li><p><strong>format</strong> <p>Set the format for the output video.
It accepts the following values:</p>
<ul>
<li><code class="docutils literal"><span class="pre">yuv420</span></code>: force YUV420 output</li>
<li><code class="docutils literal"><span class="pre">yuv422</span></code>: force YUV422 output</li>
<li><code class="docutils literal"><span class="pre">yuv444</span></code>: force YUV444 output</li>
<li><code class="docutils literal"><span class="pre">rgb</span></code>: force packed RGB output</li>
<li><code class="docutils literal"><span class="pre">gbrp</span></code>: force planar RGB output</li>
<li><p><code class="docutils literal notranslate"><span class="pre">yuv420</span></code>: force YUV420 output</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">yuv422</span></code>: force YUV422 output</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">yuv444</span></code>: force YUV444 output</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">rgb</span></code>: force packed RGB output</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">gbrp</span></code>: force planar RGB output</p></li>
</ul>
<p>Default value is <code class="docutils literal"><span class="pre">yuv420</span></code>.</p>
</li>
<li><strong>rgb</strong> (<em>deprecated</em>) If set to 1, force the filter to accept inputs in the RGB color space. Default value is 0.
This option is deprecated, use format instead.</li>
<li><strong>repeatlast</strong> If set to 1, force the filter to draw the last overlay frame over the main input until the end of
the stream. A value of 0 disables this behavior. Default value is 1.</li>
<p>Default value is <code class="docutils literal notranslate"><span class="pre">yuv420</span></code>.</p>
</p></li>
<li><p><strong>rgb</strong> (<em>deprecated</em>) If set to 1, force the filter to accept inputs in the RGB color space. Default value is 0.
This option is deprecated, use format instead.</p></li>
<li><p><strong>repeatlast</strong> If set to 1, force the filter to draw the last overlay frame over the main input until the end of
the stream. A value of 0 disables this behavior. Default value is 1.</p></li>
</ul>
</td>
</tr>
</tbody>
</table>
</dd>
</dl>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg-filters.html#overlay-1">overlay</a></p>
</dd></dl>
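<p>A common use is watermarking. The sketch below positions an overlay 10 px from the bottom-right corner using ffmpeg overlay’s <code class="docutils literal notranslate"><span class="pre">main_w</span></code>/<code class="docutils literal notranslate"><span class="pre">main_h</span></code> and <code class="docutils literal notranslate"><span class="pre">overlay_w</span></code>/<code class="docutils literal notranslate"><span class="pre">overlay_h</span></code> expression variables; the ffmpeg-python calls are shown as comments:</p>

```python
# Position expressions for a bottom-right watermark with a 10 px margin.
x_expr = "main_w-overlay_w-10"
y_expr = "main_h-overlay_h-10"
# Roughly equivalent ffmpeg-python usage (assuming ffmpeg-python is installed):
#   main = ffmpeg.input('in.mp4')
#   logo = ffmpeg.input('logo.png')
#   ffmpeg.overlay(main, logo, x=x_expr, y=y_expr, eof_action='pass') \
#         .output('out.mp4').run()
print(x_expr, y_expr)
```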
<dl class="function">
<dt id="ffmpeg.setpts">
<code class="descclassname">ffmpeg.</code><code class="descname">setpts</code><span class="sig-paren">(</span><em>parent_node</em>, <em>expr</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.setpts" title="Permalink to this definition"></a></dt>
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">setpts</code><span class="sig-paren">(</span><em class="sig-param">stream</em>, <em class="sig-param">expr</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.setpts" title="Permalink to this definition"></a></dt>
<dd><p>Change the PTS (presentation timestamp) of the input frames.</p>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />
<col class="field-body" />
<tbody valign="top">
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><strong>expr</strong> The expression which is evaluated for each frame to construct its timestamp.</td>
</tr>
</tbody>
</table>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>expr</strong> The expression which is evaluated for each frame to construct its timestamp.</p>
</dd>
</dl>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg-filters.html#setpts_002c-asetpts">setpts, asetpts</a></p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.trim">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">trim</code><span class="sig-paren">(</span><em class="sig-param">stream</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.trim" title="Permalink to this definition"></a></dt>
<dd><p>Trim the input so that the output contains one continuous subpart of the input.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>start</strong> Specify the time of the start of the kept section, i.e. the frame with the timestamp start will be the
first frame in the output.</p></li>
<li><p><strong>end</strong> Specify the time of the first frame that will be dropped, i.e. the frame immediately preceding the one
with the timestamp end will be the last frame in the output.</p></li>
<li><p><strong>start_pts</strong> This is the same as start, except this option sets the start timestamp in timebase units instead of
seconds.</p></li>
<li><p><strong>end_pts</strong> This is the same as end, except this option sets the end timestamp in timebase units instead of
seconds.</p></li>
<li><p><strong>duration</strong> The maximum duration of the output in seconds.</p></li>
<li><p><strong>start_frame</strong> The number of the first frame that should be passed to the output.</p></li>
<li><p><strong>end_frame</strong> The number of the first frame that should be dropped.</p></li>
</ul>
</dd>
</dl>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg-filters.html#trim">trim</a></p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.vflip">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">vflip</code><span class="sig-paren">(</span><em class="sig-param">stream</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.vflip" title="Permalink to this definition"></a></dt>
<dd><p>Flip the input video vertically.</p>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg-filters.html#vflip">vflip</a></p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.zoompan">
<code class="sig-prename descclassname">ffmpeg.</code><code class="sig-name descname">zoompan</code><span class="sig-paren">(</span><em class="sig-param">stream</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.zoompan" title="Permalink to this definition"></a></dt>
<dd><p>Apply Zoom &amp; Pan effect.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>zoom</strong> Set the zoom expression. Default is 1.</p></li>
<li><p><strong>x</strong> Set the x expression. Default is 0.</p></li>
<li><p><strong>y</strong> Set the y expression. Default is 0.</p></li>
<li><p><strong>d</strong> Set the duration expression in number of frames. This sets how many frames the effect lasts for a
single input image.</p></li>
<li><p><strong>s</strong> Set the output image size, default is <code class="docutils literal notranslate"><span class="pre">hd720</span></code>.</p></li>
<li><p><strong>fps</strong> Set the output frame rate, default is 25.</p></li>
<li><p><strong>z</strong> Alias for <code class="docutils literal notranslate"><span class="pre">zoom</span></code>.</p></li>
</ul>
</dd>
</dl>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg-filters.html#zoompan">zoompan</a></p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.input">
<code class="descclassname">ffmpeg.</code><code class="descname">input</code><span class="sig-paren">(</span><em>filename</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.input" title="Permalink to this definition"></a></dt>
<dd><p>Input file URL (ffmpeg <code class="docutils literal"><span class="pre">-i</span></code> option)</p>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg.html#Main-options">Main options</a></p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.merge_outputs">
<code class="descclassname">ffmpeg.</code><code class="descname">merge_outputs</code><span class="sig-paren">(</span><em>*parent_nodes</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.merge_outputs" title="Permalink to this definition"></a></dt>
<dd></dd></dl>
<dl class="function">
<dt id="ffmpeg.output">
<code class="descclassname">ffmpeg.</code><code class="descname">output</code><span class="sig-paren">(</span><em>parent_node</em>, <em>filename</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.output" title="Permalink to this definition"></a></dt>
<dd><p>Output file URL</p>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg.html#Synopsis">Synopsis</a></p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.overwrite_output">
<code class="descclassname">ffmpeg.</code><code class="descname">overwrite_output</code><span class="sig-paren">(</span><em>parent_node</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.overwrite_output" title="Permalink to this definition"></a></dt>
<dd><p>Overwrite output files without asking (ffmpeg <code class="docutils literal"><span class="pre">-y</span></code> option)</p>
<p>Official documentation: <a class="reference external" href="https://ffmpeg.org/ffmpeg.html#Main-options">Main options</a></p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.get_args">
<code class="descclassname">ffmpeg.</code><code class="descname">get_args</code><span class="sig-paren">(</span><em>node</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.get_args" title="Permalink to this definition"></a></dt>
<dd><p>Get command-line arguments for ffmpeg.</p>
</dd></dl>
<dl class="function">
<dt id="ffmpeg.run">
<code class="descclassname">ffmpeg.</code><code class="descname">run</code><span class="sig-paren">(</span><em>node</em>, <em>cmd=ffmpeg</em><span class="sig-paren">)</span><a class="headerlink" href="#ffmpeg.run" title="Permalink to this definition"></a></dt>
<dd><p>Run ffmpeg on node graph.</p>
</dd></dl>
</div>
<div class="section" id="indices-and-tables">
<h1>Indices and tables<a class="headerlink" href="#indices-and-tables" title="Permalink to this headline"></a></h1>
<ul class="simple">
<li><p><a class="reference internal" href="genindex.html"><span class="std std-ref">Index</span></a></p></li>
<li><p><a class="reference internal" href="py-modindex.html"><span class="std std-ref">Module Index</span></a></p></li>
<li><p><a class="reference internal" href="search.html"><span class="std std-ref">Search Page</span></a></p></li>
</ul>
</div>
@ -393,7 +715,7 @@ for single input image.</li>
</div>
<div class="sphinxsidebar" role="navigation" aria-label="main navigation">
<div class="sphinxsidebarwrapper">
<h3><a href="#">Table of Contents</a></h3>
<ul>
<li><a class="reference internal" href="#">ffmpeg-python: Python bindings for FFmpeg</a></li>
<li><a class="reference internal" href="#indices-and-tables">Indices and tables</a></li>
@ -407,13 +729,13 @@ for single input image.</li>
</ul>
</div>
<div id="searchbox" style="display: none" role="search">
<h3 id="searchlabel">Quick search</h3>
<div class="searchformwrapper">
<form class="search" action="search.html" method="get">
<input type="text" name="q" aria-labelledby="searchlabel" />
<input type="submit" value="Go" />
</form>
</div>
</div>
<script type="text/javascript">$('#searchbox').show(0);</script>
</div>
@ -434,7 +756,7 @@ for single input image.</li>
</div>
<div class="footer" role="contentinfo">
&#169; Copyright 2017, Karl Kroening.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 2.1.0.
</div>
</body>
</html>

Binary file not shown.

View File

@ -1,29 +1,17 @@
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta charset="utf-8" />
<title>Python Module Index &#8212; ffmpeg-python documentation</title>
<link rel="stylesheet" href="_static/nature.css" type="text/css" />
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
<script type="text/javascript" id="documentation_options" data-url_root="./" src="_static/documentation_options.js"></script>
<script type="text/javascript" src="_static/jquery.js"></script>
<script type="text/javascript" src="_static/underscore.js"></script>
<script type="text/javascript" src="_static/doctools.js"></script>
<script type="text/javascript" src="_static/language_data.js"></script>
<link rel="index" title="Index" href="genindex.html" />
<link rel="search" title="Search" href="search.html" />
@ -33,8 +21,7 @@
</script>
</head><body>
<div class="related" role="navigation" aria-label="related navigation">
<h3>Navigation</h3>
<ul>
@ -78,13 +65,13 @@
<div class="sphinxsidebar" role="navigation" aria-label="main navigation">
<div class="sphinxsidebarwrapper">
<div id="searchbox" style="display: none" role="search">
<h3 id="searchlabel">Quick search</h3>
<div class="searchformwrapper">
<form class="search" action="search.html" method="get">
<input type="text" name="q" aria-labelledby="searchlabel" />
<input type="submit" value="Go" />
</form>
</div>
</div>
<script type="text/javascript">$('#searchbox').show(0);</script>
</div>
@ -105,7 +92,7 @@
</div>
<div class="footer" role="contentinfo">
&#169; Copyright 2017, Karl Kroening.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 2.1.0.
</div>
</body>
</html>

View File

@ -1,41 +1,25 @@
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta charset="utf-8" />
<title>Search &#8212; ffmpeg-python documentation</title>
<link rel="stylesheet" href="_static/nature.css" type="text/css" />
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
<script type="text/javascript" id="documentation_options" data-url_root="./" src="_static/documentation_options.js"></script>
<script type="text/javascript" src="_static/jquery.js"></script>
<script type="text/javascript" src="_static/underscore.js"></script>
<script type="text/javascript" src="_static/doctools.js"></script>
<script type="text/javascript" src="_static/language_data.js"></script>
<script type="text/javascript" src="_static/searchtools.js"></script>
<link rel="index" title="Index" href="genindex.html" />
<link rel="search" title="Search" href="#" />
<script type="text/javascript" src="searchindex.js" defer></script>
</head><body>
<div class="related" role="navigation" aria-label="related navigation">
<h3>Navigation</h3>
<ul>
@ -101,7 +85,7 @@
</div>
<div class="footer" role="contentinfo">
&#169; Copyright 2017, Karl Kroening.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 2.1.0.
</div>
</body>
</html>

File diff suppressed because one or more lines are too long

BIN
doc/jupyter-demo.gif Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 912 KiB

BIN
doc/jupyter-screenshot.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 461 KiB

BIN
doc/logo.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 57 KiB

BIN
doc/logo.xcf Normal file

Binary file not shown.

263
examples/README.md Normal file
View File

@ -0,0 +1,263 @@
# Examples
## [Get video info (ffprobe)](https://github.com/kkroening/ffmpeg-python/blob/master/examples/video_info.py#L15)
```python
probe = ffmpeg.probe(args.in_filename)
video_stream = next((stream for stream in probe['streams'] if stream['codec_type'] == 'video'), None)
width = int(video_stream['width'])
height = int(video_stream['height'])
```
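<code>ffmpeg.probe</code> returns ffprobe's JSON output as a plain dict, so the stream-selection logic above can be exercised without a media file. A minimal sketch with a fabricated probe result (field values are illustrative, not from a real file):

```python
# Fabricated stand-in for ffmpeg.probe(...) output; a real result
# carries many more fields per stream.
probe = {
    'streams': [
        {'codec_type': 'audio', 'codec_name': 'aac'},
        {'codec_type': 'video', 'codec_name': 'h264', 'width': '1280', 'height': '720'},
    ]
}
video_stream = next((stream for stream in probe['streams'] if stream['codec_type'] == 'video'), None)
width = int(video_stream['width'])
height = int(video_stream['height'])
print(width, height)  # 1280 720
```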
## [Generate thumbnail for video](https://github.com/kkroening/ffmpeg-python/blob/master/examples/get_video_thumbnail.py#L21)
<img src="https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/examples/graphs/get_video_thumbnail.png" alt="get-video-thumbnail graph" width="30%" />
```python
(
    ffmpeg
    .input(in_filename, ss=time)
    .filter('scale', width, -1)
    .output(out_filename, vframes=1)
    .run()
)
```
## [Convert video to numpy array](https://github.com/kkroening/ffmpeg-python/blob/master/examples/ffmpeg-numpy.ipynb)
<img src="https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/examples/graphs/ffmpeg-numpy.png" alt="ffmpeg-numpy graph" width="20%" />
```python
out, _ = (
    ffmpeg
    .input('in.mp4')
    .output('pipe:', format='rawvideo', pix_fmt='rgb24')
    .run(capture_stdout=True)
)
video = (
    np
    .frombuffer(out, np.uint8)
    .reshape([-1, height, width, 3])
)
```
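The `-1` in the reshape works because the rawvideo/rgb24 pipe emits exactly `height * width * 3` bytes per frame, so NumPy can recover the frame count. A small sketch on synthetic zero bytes standing in for the captured stdout (the dimensions here are made up):

```python
import numpy as np

height, width, num_frames = 4, 6, 5  # made-up dimensions
out = bytes(num_frames * height * width * 3)  # stand-in for ffmpeg's stdout
video = (
    np
    .frombuffer(out, np.uint8)
    .reshape([-1, height, width, 3])
)
print(video.shape)  # (5, 4, 6, 3)
```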
## [Read single video frame as jpeg through pipe](https://github.com/kkroening/ffmpeg-python/blob/master/examples/read_frame_as_jpeg.py#L16)
<img src="https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/examples/graphs/read_frame_as_jpeg.png" alt="read-frame-as-jpeg graph" width="30%" />
```python
out, _ = (
    ffmpeg
    .input(in_filename)
    .filter('select', 'gte(n,{})'.format(frame_num))
    .output('pipe:', vframes=1, format='image2', vcodec='mjpeg')
    .run(capture_stdout=True)
)
```
## [Convert sound to raw PCM audio](https://github.com/kkroening/ffmpeg-python/blob/master/examples/transcribe.py#L23)
<img src="https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/examples/graphs/transcribe.png" alt="transcribe graph" width="30%" />
```python
out, _ = (ffmpeg
    .input(in_filename, **input_kwargs)
    .output('-', format='s16le', acodec='pcm_s16le', ac=1, ar='16k')
    .overwrite_output()
    .run(capture_stdout=True)
)
```
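The captured stdout here is headerless 16-bit little-endian PCM, so it can be viewed directly as int16 samples. Sketched below on hand-packed bytes standing in for `out` (a real run needs ffmpeg and an input file):

```python
import struct

import numpy as np

# Four fake samples packed as s16le, standing in for ffmpeg's output.
out = struct.pack('<4h', 0, 1000, -1000, 32767)
samples = np.frombuffer(out, np.int16)
print(samples.tolist())  # [0, 1000, -1000, 32767]
```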
## Assemble video from sequence of frames
<img src="https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/examples/graphs/glob.png" alt="glob" width="25%" />
```python
(
    ffmpeg
    .input('/path/to/jpegs/*.jpg', pattern_type='glob', framerate=25)
    .output('movie.mp4')
    .run()
)
```
With additional filtering:
<img src="https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/examples/graphs/glob-filter.png" alt="glob-filter" width="50%" />
```python
(
    ffmpeg
    .input('/path/to/jpegs/*.jpg', pattern_type='glob', framerate=25)
    .filter('deflicker', mode='pm', size=10)
    .filter('scale', size='hd1080', force_original_aspect_ratio='increase')
    .output('movie.mp4', crf=20, preset='slower', movflags='faststart', pix_fmt='yuv420p')
    .view(filename='filter_graph')
    .run()
)
```
## Audio/video pipeline
<img src="https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/examples/graphs/av-pipeline.png" alt="av-pipeline graph" width="80%" />
```python
in1 = ffmpeg.input('in1.mp4')
in2 = ffmpeg.input('in2.mp4')
v1 = in1.video.hflip()
a1 = in1.audio
v2 = in2.video.filter('reverse').filter('hue', s=0)
a2 = in2.audio.filter('areverse').filter('aphaser')
joined = ffmpeg.concat(v1, a1, v2, a2, v=1, a=1).node
v3 = joined[0]
a3 = joined[1].filter('volume', 0.8)
out = ffmpeg.output(v3, a3, 'out.mp4')
out.run()
```
## Mono to stereo with offsets and video
<img src="https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/examples/graphs/mono-to-stereo.png" alt="mono-to-stereo graph" width="80%" />
```python
audio_left = (
    ffmpeg
    .input('audio-left.wav')
    .filter('atrim', start=5)
    .filter('asetpts', 'PTS-STARTPTS')
)

audio_right = (
    ffmpeg
    .input('audio-right.wav')
    .filter('atrim', start=10)
    .filter('asetpts', 'PTS-STARTPTS')
)

input_video = ffmpeg.input('input-video.mp4')

(
    ffmpeg
    .filter((audio_left, audio_right), 'join', inputs=2, channel_layout='stereo')
    .output(input_video.video, 'output-video.mp4', shortest=None, vcodec='copy')
    .overwrite_output()
    .run()
)
```
## [Jupyter Frame Viewer](https://github.com/kkroening/ffmpeg-python/blob/master/examples/ffmpeg-numpy.ipynb)
<img src="https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/doc/jupyter-screenshot.png" alt="jupyter screenshot" width="75%" />
## [Jupyter Stream Editor](https://github.com/kkroening/ffmpeg-python/blob/master/examples/ffmpeg-numpy.ipynb)
<img src="https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/doc/jupyter-demo.gif" alt="jupyter demo" width="75%" />
## [Tensorflow Streaming](https://github.com/kkroening/ffmpeg-python/blob/master/examples/tensorflow_stream.py)
<img src="https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/examples/graphs/tensorflow-stream.png" alt="tensorflow streaming; challenge mode: combine this with the webcam example below" width="55%" />
- Decode input video with ffmpeg
- Process video with tensorflow using "deep dream" example
- Encode output video with ffmpeg
```python
process1 = (
    ffmpeg
    .input(in_filename)
    .output('pipe:', format='rawvideo', pix_fmt='rgb24', vframes=8)
    .run_async(pipe_stdout=True)
)
process2 = (
    ffmpeg
    .input('pipe:', format='rawvideo', pix_fmt='rgb24', s='{}x{}'.format(width, height))
    .output(out_filename, pix_fmt='yuv420p')
    .overwrite_output()
    .run_async(pipe_stdin=True)
)

while True:
    in_bytes = process1.stdout.read(width * height * 3)
    if not in_bytes:
        break
    in_frame = (
        np
        .frombuffer(in_bytes, np.uint8)
        .reshape([height, width, 3])
    )
    # See examples/tensorflow_stream.py:
    out_frame = deep_dream.process_frame(in_frame)
    process2.stdin.write(
        out_frame
        .astype(np.uint8)
        .tobytes()
    )

process2.stdin.close()
process1.wait()
process2.wait()
```
<img src="https://raw.githubusercontent.com/kkroening/ffmpeg-python/master/examples/graphs/dream.png" alt="deep dream streaming" width="40%" />
## [FaceTime webcam input (OS X)](https://github.com/kkroening/ffmpeg-python/blob/master/examples/facetime.py)
```python
(
    ffmpeg
    .input('FaceTime', format='avfoundation', pix_fmt='uyvy422', framerate=30)
    .output('out.mp4', pix_fmt='yuv420p', vframes=100)
    .run()
)
```
## Stream from a local video to HTTP server
```python
video_format = "flv"
server_url = "http://127.0.0.1:8080"

process = (
    ffmpeg
    .input("input.mp4")
    .output(
        server_url,
        codec="copy",  # use the same codecs as the original video
        listen=1,  # enables HTTP server
        f=video_format)
    .global_args("-re")  # argument to act as a live stream
    .run()
)
```
To receive the video, you can use ffplay in the terminal:
```
$ ffplay -f flv http://localhost:8080
```
## Stream from RTSP server to TCP socket
```python
packet_size = 4096

process = (
    ffmpeg
    .input('rtsp://%s:8554/default')
    .output('-', format='h264')
    .run_async(pipe_stdout=True)
)

while process.poll() is None:
    packet = process.stdout.read(packet_size)
    try:
        tcp_socket.send(packet)
    except socket.error:
        process.stdout.close()
        process.wait()
        break
```
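The `tcp_socket` above is assumed to be connected before the loop starts. A minimal sketch of setting one up; here a throwaway local listener stands in for the real receiving end, whose address would be used in practice:

```python
import socket

# Throwaway local listener standing in for the real receiver.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))  # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

tcp_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_socket.connect((host, port))
conn, _ = server.accept()

tcp_socket.send(b'packet')
received = conn.recv(6)
print(received)  # b'packet'

conn.close()
tcp_socket.close()
server.close()
```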

8
examples/facetime.py Normal file
View File

@ -0,0 +1,8 @@
import ffmpeg

(
    ffmpeg
    .input('FaceTime', format='avfoundation', pix_fmt='uyvy422', framerate=30)
    .output('out.mp4', pix_fmt='yuv420p', vframes=100)
    .run()
)

216
examples/ffmpeg-numpy.ipynb Normal file
View File

@ -0,0 +1,216 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from ipywidgets import interact\n",
"from matplotlib import pyplot as plt\n",
"import ffmpeg\n",
"import ipywidgets as widgets\n",
"import numpy as np"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"probe = ffmpeg.probe('in.mp4')\n",
"video_info = next(s for s in probe['streams'] if s['codec_type'] == 'video')\n",
"width = int(video_info['width'])\n",
"height = int(video_info['height'])\n",
"num_frames = int(video_info['nb_frames'])"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "5f63dc164956464c994ec58d86ee7cd9",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"interactive(children=(IntSlider(value=0, description='frame', max=209), Output()), _dom_classes=('widget-inter…"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"out, err = (\n",
" ffmpeg\n",
" .input('in.mp4')\n",
" .output('pipe:', format='rawvideo', pix_fmt='rgb24')\n",
" .run(capture_stdout=True)\n",
")\n",
"video = (\n",
" np\n",
" .frombuffer(out, np.uint8)\n",
" .reshape([-1, height, width, 3])\n",
")\n",
"\n",
"@interact(frame=(0, num_frames))\n",
"def show_frame(frame=0):\n",
" plt.imshow(video[frame,:,:,:])"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "84bcac52195f47f8854f09acd7666b84",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"interactive(children=(Checkbox(value=True, description='enable_overlay'), Checkbox(value=True, description='en…"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from io import BytesIO\n",
"from PIL import Image\n",
"\n",
"\n",
"def extract_frame(stream, frame_num):\n",
" while isinstance(stream, ffmpeg.nodes.OutputStream):\n",
" stream = stream.node.incoming_edges[0].upstream_node.stream()\n",
" out, _ = (\n",
" stream\n",
" .filter_('select', 'gte(n,{})'.format(frame_num))\n",
" .output('pipe:', format='rawvideo', pix_fmt='rgb24', vframes=1)\n",
" .run(capture_stdout=True, capture_stderr=True)\n",
" )\n",
" return np.frombuffer(out, np.uint8).reshape([height, width, 3])\n",
"\n",
"\n",
"def png_to_np(png_bytes):\n",
" buffer = BytesIO(png_bytes)\n",
" pil_image = Image.open(buffer)\n",
" return np.array(pil_image)\n",
" \n",
"\n",
"def build_graph(\n",
" enable_overlay, flip_overlay, enable_box, box_x, box_y,\n",
" thickness, color):\n",
"\n",
" stream = ffmpeg.input('in.mp4')\n",
"\n",
" if enable_overlay:\n",
" overlay = ffmpeg.input('overlay.png')\n",
" if flip_overlay:\n",
" overlay = overlay.hflip()\n",
" stream = stream.overlay(overlay)\n",
"\n",
" if enable_box:\n",
" stream = stream.drawbox(\n",
" box_x, box_y, 120, 120, color=color, t=thickness)\n",
"\n",
" return stream.output('out.mp4')\n",
"\n",
"\n",
"def show_image(ax, stream, frame_num):\n",
" try:\n",
" image = extract_frame(stream, frame_num)\n",
" ax.imshow(image)\n",
" ax.axis('off')\n",
" except ffmpeg.Error as e:\n",
" print(e.stderr.decode())\n",
"\n",
"\n",
"def show_graph(ax, stream, detail):\n",
" data = ffmpeg.view(stream, detail=detail, pipe=True)\n",
" image = png_to_np(data)\n",
" ax.imshow(image, aspect='equal', interpolation='hanning')\n",
" ax.set_xlim(0, 1100)\n",
" ax.axis('off')\n",
"\n",
"\n",
"@interact(\n",
" frame_num=(0, num_frames),\n",
" box_x=(0, 200),\n",
" box_y=(0, 200),\n",
" thickness=(1, 40),\n",
" color=['red', 'green', 'magenta', 'blue'],\n",
")\n",
"def f(\n",
" enable_overlay=True,\n",
" enable_box=True,\n",
" flip_overlay=True,\n",
" graph_detail=False,\n",
" frame_num=0,\n",
" box_x=50,\n",
" box_y=50,\n",
" thickness=5,\n",
" color='red'):\n",
"\n",
" stream = build_graph(\n",
" enable_overlay,\n",
" flip_overlay,\n",
" enable_box,\n",
" box_x,\n",
" box_y,\n",
" thickness,\n",
" color\n",
" )\n",
"\n",
" fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(15,4))\n",
" plt.tight_layout()\n",
" show_image(ax0, stream, frame_num)\n",
" show_graph(ax1, stream, graph_detail)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

35
examples/get_video_thumbnail.py Executable file
View File

@ -0,0 +1,35 @@
#!/usr/bin/env python
from __future__ import unicode_literals, print_function
import argparse
import ffmpeg
import sys
parser = argparse.ArgumentParser(description='Generate video thumbnail')
parser.add_argument('in_filename', help='Input filename')
parser.add_argument('out_filename', help='Output filename')
parser.add_argument(
    '--time', type=int, default=0.1, help='Time offset')
parser.add_argument(
    '--width', type=int, default=120,
    help='Width of output thumbnail (height automatically determined by aspect ratio)')


def generate_thumbnail(in_filename, out_filename, time, width):
    try:
        (
            ffmpeg
            .input(in_filename, ss=time)
            .filter('scale', width, -1)
            .output(out_filename, vframes=1)
            .overwrite_output()
            .run(capture_stdout=True, capture_stderr=True)
        )
    except ffmpeg.Error as e:
        print(e.stderr.decode(), file=sys.stderr)
        sys.exit(1)


if __name__ == '__main__':
    args = parser.parse_args()
    generate_thumbnail(args.in_filename, args.out_filename, args.time, args.width)

Binary file not shown.

After

Width:  |  Height:  |  Size: 32 KiB

BIN
examples/graphs/dream.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 700 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 3.4 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 5.3 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 7.2 KiB

BIN
examples/graphs/glob.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 4.0 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 17 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 4.8 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 5.9 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 15 KiB

BIN
examples/overlay.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 2.2 KiB

28
examples/read_frame_as_jpeg.py Executable file
View File

@ -0,0 +1,28 @@
#!/usr/bin/env python
from __future__ import unicode_literals
import argparse
import ffmpeg
import sys
parser = argparse.ArgumentParser(
    description='Read individual video frame into memory as jpeg and write to stdout')
parser.add_argument('in_filename', help='Input filename')
parser.add_argument('frame_num', help='Frame number')


def read_frame_as_jpeg(in_filename, frame_num):
    out, err = (
        ffmpeg
        .input(in_filename)
        .filter('select', 'gte(n,{})'.format(frame_num))
        .output('pipe:', vframes=1, format='image2', vcodec='mjpeg')
        .run(capture_stdout=True)
    )
    return out


if __name__ == '__main__':
    args = parser.parse_args()
    out = read_frame_as_jpeg(args.in_filename, args.frame_num)
    sys.stdout.buffer.write(out)

View File

@ -0,0 +1,9 @@
ffmpeg-python
gevent
google-cloud-speech
graphviz
ipywidgets
jupyter
matplotlib
Pillow
tqdm

130
examples/show_progress.py Executable file
View File

@ -0,0 +1,130 @@
#!/usr/bin/env python
from __future__ import unicode_literals, print_function
from tqdm import tqdm
import argparse
import contextlib
import ffmpeg
import gevent
import gevent.monkey; gevent.monkey.patch_all(thread=False)
import os
import shutil
import socket
import sys
import tempfile
import textwrap
parser = argparse.ArgumentParser(description=textwrap.dedent('''\
    Process video and show a progress bar.

    This is an example of using the ffmpeg `-progress` option with a
    unix-domain socket to report progress in the form of a progress
    bar.

    The video processing simply consists of converting the video to
    sepia colors, but the same pattern can be applied to other use
    cases.
'''))
parser.add_argument('in_filename', help='Input filename')
parser.add_argument('out_filename', help='Output filename')
@contextlib.contextmanager
def _tmpdir_scope():
tmpdir = tempfile.mkdtemp()
try:
yield tmpdir
finally:
shutil.rmtree(tmpdir)
def _do_watch_progress(filename, sock, handler):
"""Function to run in a separate gevent greenlet to read progress
events from a unix-domain socket."""
connection, client_address = sock.accept()
data = b''
try:
while True:
more_data = connection.recv(16)
if not more_data:
break
data += more_data
lines = data.split(b'\n')
for line in lines[:-1]:
line = line.decode()
parts = line.split('=')
key = parts[0] if len(parts) > 0 else None
value = parts[1] if len(parts) > 1 else None
handler(key, value)
data = lines[-1]
finally:
connection.close()
@contextlib.contextmanager
def _watch_progress(handler):
"""Context manager for creating a unix-domain socket and listen for
ffmpeg progress events.
The socket filename is yielded from the context manager and the
socket is closed when the context manager is exited.
Args:
handler: a function to be called when progress events are
received; receives a ``key`` argument and ``value``
argument. (The example ``show_progress`` below uses tqdm)
Yields:
socket_filename: the name of the socket file.
"""
with _tmpdir_scope() as tmpdir:
socket_filename = os.path.join(tmpdir, 'sock')
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
with contextlib.closing(sock):
sock.bind(socket_filename)
sock.listen(1)
child = gevent.spawn(_do_watch_progress, socket_filename, sock, handler)
try:
yield socket_filename
except:
gevent.kill(child)
raise
@contextlib.contextmanager
def show_progress(total_duration):
"""Create a unix-domain socket to watch progress and render tqdm
progress bar."""
with tqdm(total=round(total_duration, 2)) as bar:
def handler(key, value):
if key == 'out_time_ms':
time = round(float(value) / 1000000., 2)
bar.update(time - bar.n)
elif key == 'progress' and value == 'end':
bar.update(bar.total - bar.n)
with _watch_progress(handler) as socket_filename:
yield socket_filename
if __name__ == '__main__':
args = parser.parse_args()
total_duration = float(ffmpeg.probe(args.in_filename)['format']['duration'])
with show_progress(total_duration) as socket_filename:
# See https://ffmpeg.org/ffmpeg-filters.html#Examples-44
sepia_values = [.393, .769, .189, 0, .349, .686, .168, 0, .272, .534, .131]
try:
(ffmpeg
.input(args.in_filename)
.colorchannelmixer(*sepia_values)
.output(args.out_filename)
.global_args('-progress', 'unix://{}'.format(socket_filename))
.overwrite_output()
.run(capture_stdout=True, capture_stderr=True)
)
except ffmpeg.Error as e:
print(e.stderr, file=sys.stderr)
sys.exit(1)
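A minimal, socket-free sketch of the `key=value` protocol that ffmpeg's `-progress` option emits and that `_do_watch_progress` above parses line by line (the sample payload is hand-written for illustration, not captured from ffmpeg):

```python
def parse_progress(data):
    """Parse a chunk of -progress output into (key, value) pairs."""
    events = []
    for line in data.decode().splitlines():
        # Each progress line is 'key=value'; a missing value yields None.
        key, _, value = line.partition('=')
        events.append((key, value or None))
    return events

sample = b'out_time_ms=1500000\nprogress=continue\nprogress=end\n'
events = parse_progress(sample)
```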

examples/split_silence.py Executable file

@@ -0,0 +1,141 @@
#!/usr/bin/env python
from __future__ import unicode_literals
import argparse
import errno
import ffmpeg
import logging
import os
import re
import subprocess
import sys
logging.basicConfig(level=logging.INFO, format='%(message)s')
logger = logging.getLogger(__file__)
logger.setLevel(logging.INFO)
DEFAULT_DURATION = 0.3
DEFAULT_THRESHOLD = -60
parser = argparse.ArgumentParser(description='Split media into separate chunks wherever silence occurs')
parser.add_argument('in_filename', help='Input filename (`-` for stdin)')
parser.add_argument('out_pattern', help='Output filename pattern (e.g. `out/chunk_{:04d}.wav`)')
parser.add_argument('--silence-threshold', default=DEFAULT_THRESHOLD, type=int, help='Silence threshold (in dB)')
parser.add_argument('--silence-duration', default=DEFAULT_DURATION, type=float, help='Silence duration')
parser.add_argument('--start-time', type=float, help='Start time (seconds)')
parser.add_argument('--end-time', type=float, help='End time (seconds)')
parser.add_argument('-v', dest='verbose', action='store_true', help='Verbose mode')
silence_start_re = re.compile(r' silence_start: (?P<start>[0-9]+(\.?[0-9]*))$')
silence_end_re = re.compile(r' silence_end: (?P<end>[0-9]+(\.?[0-9]*)) ')
total_duration_re = re.compile(
r'size=[^ ]+ time=(?P<hours>[0-9]{2}):(?P<minutes>[0-9]{2}):(?P<seconds>[0-9\.]{5}) bitrate=')
def _logged_popen(cmd_line, *args, **kwargs):
logger.debug('Running command: {}'.format(subprocess.list2cmdline(cmd_line)))
return subprocess.Popen(cmd_line, *args, **kwargs)
def get_chunk_times(in_filename, silence_threshold, silence_duration, start_time=None, end_time=None):
input_kwargs = {}
if start_time is not None:
input_kwargs['ss'] = start_time
else:
start_time = 0.
if end_time is not None:
input_kwargs['t'] = end_time - start_time
p = _logged_popen(
(ffmpeg
.input(in_filename, **input_kwargs)
.filter('silencedetect', n='{}dB'.format(silence_threshold), d=silence_duration)
.output('-', format='null')
.compile()
) + ['-nostats'], # FIXME: use .nostats() once it's implemented in ffmpeg-python.
stderr=subprocess.PIPE
)
output = p.communicate()[1].decode('utf-8')
if p.returncode != 0:
sys.stderr.write(output)
sys.exit(1)
logger.debug(output)
lines = output.splitlines()
# Chunks start when silence ends, and chunks end when silence starts.
chunk_starts = []
chunk_ends = []
for line in lines:
silence_start_match = silence_start_re.search(line)
silence_end_match = silence_end_re.search(line)
total_duration_match = total_duration_re.search(line)
if silence_start_match:
chunk_ends.append(float(silence_start_match.group('start')))
if len(chunk_starts) == 0:
# Started with non-silence.
chunk_starts.append(start_time or 0.)
elif silence_end_match:
chunk_starts.append(float(silence_end_match.group('end')))
elif total_duration_match:
hours = int(total_duration_match.group('hours'))
minutes = int(total_duration_match.group('minutes'))
seconds = float(total_duration_match.group('seconds'))
end_time = hours * 3600 + minutes * 60 + seconds
if len(chunk_starts) == 0:
# No silence found.
chunk_starts.append(start_time)
if len(chunk_starts) > len(chunk_ends):
# Finished with non-silence.
chunk_ends.append(end_time or 10000000.)
return list(zip(chunk_starts, chunk_ends))
def _makedirs(path):
"""Python2-compatible version of ``os.makedirs(path, exist_ok=True)``."""
try:
os.makedirs(path)
except OSError as exc:
if exc.errno != errno.EEXIST or not os.path.isdir(path):
raise
def split_audio(
in_filename,
out_pattern,
silence_threshold=DEFAULT_THRESHOLD,
silence_duration=DEFAULT_DURATION,
start_time=None,
end_time=None,
verbose=False,
):
chunk_times = get_chunk_times(in_filename, silence_threshold, silence_duration, start_time, end_time)
for i, (start_time, end_time) in enumerate(chunk_times):
time = end_time - start_time
out_filename = out_pattern.format(i, i=i)
_makedirs(os.path.dirname(out_filename))
logger.info('{}: start={:.02f}, end={:.02f}, duration={:.02f}'.format(out_filename, start_time, end_time,
time))
_logged_popen(
(ffmpeg
.input(in_filename, ss=start_time, t=time)
.output(out_filename)
.overwrite_output()
.compile()
),
stdout=subprocess.PIPE if not verbose else None,
stderr=subprocess.PIPE if not verbose else None,
).communicate()
if __name__ == '__main__':
kwargs = vars(parser.parse_args())
if kwargs['verbose']:
logging.basicConfig(level=logging.DEBUG, format='%(levelname)s: %(message)s')
logger.setLevel(logging.DEBUG)
split_audio(**kwargs)
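The chunk-pairing logic inside `get_chunk_times()` can be exercised standalone: chunks start when silence ends and end when silence starts, with special-casing when the audio begins or ends with non-silence. The sketch below reuses the script's regexes against a hand-written sample of `silencedetect` stderr output (the sample text is illustrative, not real ffmpeg output):

```python
import re

silence_start_re = re.compile(r' silence_start: (?P<start>[0-9]+(\.?[0-9]*))$')
silence_end_re = re.compile(r' silence_end: (?P<end>[0-9]+(\.?[0-9]*)) ')

def chunk_times(stderr_text, start_time=0.0, end_time=10.0):
    """Pair silence events into (chunk_start, chunk_end) tuples.

    end_time stands in for the total duration parsed from ffmpeg's
    'time=' stats line in the real script.
    """
    chunk_starts, chunk_ends = [], []
    for line in stderr_text.splitlines():
        m = silence_start_re.search(line)
        if m:
            chunk_ends.append(float(m.group('start')))
            if not chunk_starts:
                chunk_starts.append(start_time)  # started with non-silence
            continue
        m = silence_end_re.search(line)
        if m:
            chunk_starts.append(float(m.group('end')))
    if len(chunk_starts) > len(chunk_ends):
        chunk_ends.append(end_time)  # finished with non-silence
    return list(zip(chunk_starts, chunk_ends))

sample = (
    '[silencedetect] silence_start: 3.5\n'
    '[silencedetect] silence_end: 4.0 | silence_duration: 0.5\n'
)
chunks = chunk_times(sample)
```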


@@ -0,0 +1,248 @@
'''Example streaming ffmpeg numpy processing.
Demonstrates using ffmpeg to decode video input, process the frames in
python, and then encode video output using ffmpeg.
This example uses two ffmpeg processes - one to decode the input video
and one to encode an output video - while the raw frame processing is
done in python with numpy.
At a high level, the signal graph looks like this:
(input video) -> [ffmpeg process 1] -> [python] -> [ffmpeg process 2] -> (output video)
This example reads/writes video files on the local filesystem, but the
same pattern can be used for other kinds of input/output (e.g. webcam,
rtmp, etc.).
The simplest processing example simply darkens each frame by
multiplying the frame's numpy array by a constant value; see
``process_frame_simple``.
A more sophisticated example processes each frame with tensorflow using
the "deep dream" tensorflow tutorial; activate this mode by calling
the script with the optional `--dream` argument. (Make sure tensorflow
is installed before running)
'''
from __future__ import print_function
import argparse
import ffmpeg
import logging
import numpy as np
import os
import subprocess
import zipfile
parser = argparse.ArgumentParser(description='Example streaming ffmpeg numpy processing')
parser.add_argument('in_filename', help='Input filename')
parser.add_argument('out_filename', help='Output filename')
parser.add_argument(
'--dream', action='store_true', help='Use DeepDream frame processing (requires tensorflow)')
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
def get_video_size(filename):
logger.info('Getting video size for {!r}'.format(filename))
probe = ffmpeg.probe(filename)
video_info = next(s for s in probe['streams'] if s['codec_type'] == 'video')
width = int(video_info['width'])
height = int(video_info['height'])
return width, height
def start_ffmpeg_process1(in_filename):
logger.info('Starting ffmpeg process1')
args = (
ffmpeg
.input(in_filename)
.output('pipe:', format='rawvideo', pix_fmt='rgb24')
.compile()
)
return subprocess.Popen(args, stdout=subprocess.PIPE)
def start_ffmpeg_process2(out_filename, width, height):
logger.info('Starting ffmpeg process2')
args = (
ffmpeg
.input('pipe:', format='rawvideo', pix_fmt='rgb24', s='{}x{}'.format(width, height))
.output(out_filename, pix_fmt='yuv420p')
.overwrite_output()
.compile()
)
return subprocess.Popen(args, stdin=subprocess.PIPE)
def read_frame(process1, width, height):
logger.debug('Reading frame')
# Note: RGB24 == 3 bytes per pixel.
frame_size = width * height * 3
in_bytes = process1.stdout.read(frame_size)
if len(in_bytes) == 0:
frame = None
else:
assert len(in_bytes) == frame_size
frame = (
np
.frombuffer(in_bytes, np.uint8)
.reshape([height, width, 3])
)
return frame
def process_frame_simple(frame):
'''Simple processing example: darken frame.'''
return frame * 0.3
def write_frame(process2, frame):
logger.debug('Writing frame')
process2.stdin.write(
frame
.astype(np.uint8)
.tobytes()
)
def run(in_filename, out_filename, process_frame):
width, height = get_video_size(in_filename)
process1 = start_ffmpeg_process1(in_filename)
process2 = start_ffmpeg_process2(out_filename, width, height)
while True:
in_frame = read_frame(process1, width, height)
if in_frame is None:
logger.info('End of input stream')
break
logger.debug('Processing frame')
out_frame = process_frame(in_frame)
write_frame(process2, out_frame)
logger.info('Waiting for ffmpeg process1')
process1.wait()
logger.info('Waiting for ffmpeg process2')
process2.stdin.close()
process2.wait()
logger.info('Done')
class DeepDream(object):
'''DeepDream implementation, adapted from official tensorflow deepdream tutorial:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/tutorials/deepdream
Credit: Alexander Mordvintsev
'''
_DOWNLOAD_URL = 'https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip'
_ZIP_FILENAME = 'deepdream_model.zip'
_MODEL_FILENAME = 'tensorflow_inception_graph.pb'
@staticmethod
def _download_model():
logger.info('Downloading deepdream model...')
try:
from urllib.request import urlretrieve # python 3
except ImportError:
from urllib import urlretrieve # python 2
urlretrieve(DeepDream._DOWNLOAD_URL, DeepDream._ZIP_FILENAME)
logger.info('Extracting deepdream model...')
zipfile.ZipFile(DeepDream._ZIP_FILENAME, 'r').extractall('.')
@staticmethod
def _tffunc(*argtypes):
'''Helper that transforms TF-graph generating function into a regular one.
See `_resize` function below.
'''
placeholders = list(map(tf.placeholder, argtypes))
def wrap(f):
out = f(*placeholders)
def wrapper(*args, **kw):
return out.eval(dict(zip(placeholders, args)), session=kw.get('session'))
return wrapper
return wrap
@staticmethod
def _base_resize(img, size):
'''Helper function that uses TF to resize an image'''
img = tf.expand_dims(img, 0)
return tf.image.resize_bilinear(img, size)[0,:,:,:]
def __init__(self):
if not os.path.exists(DeepDream._MODEL_FILENAME):
self._download_model()
self._graph = tf.Graph()
self._session = tf.InteractiveSession(graph=self._graph)
self._resize = self._tffunc(np.float32, np.int32)(self._base_resize)
with tf.gfile.FastGFile(DeepDream._MODEL_FILENAME, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
self._t_input = tf.placeholder(np.float32, name='input') # define the input tensor
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(self._t_input-imagenet_mean, 0)
tf.import_graph_def(graph_def, {'input':t_preprocessed})
self.t_obj = self.T('mixed4d_3x3_bottleneck_pre_relu')[:,:,:,139]
#self.t_obj = tf.square(self.T('mixed4c'))
def T(self, layer_name):
'''Helper for getting layer output tensor'''
return self._graph.get_tensor_by_name('import/%s:0'%layer_name)
def _calc_grad_tiled(self, img, t_grad, tile_size=512):
'''Compute the value of tensor t_grad over the image in a tiled way.
Random shifts are applied to the image to blur tile boundaries over
multiple iterations.'''
sz = tile_size
h, w = img.shape[:2]
sx, sy = np.random.randint(sz, size=2)
img_shift = np.roll(np.roll(img, sx, 1), sy, 0)
grad = np.zeros_like(img)
for y in range(0, max(h-sz//2, sz),sz):
for x in range(0, max(w-sz//2, sz),sz):
sub = img_shift[y:y+sz,x:x+sz]
g = self._session.run(t_grad, {self._t_input:sub})
grad[y:y+sz,x:x+sz] = g
return np.roll(np.roll(grad, -sx, 1), -sy, 0)
def process_frame(self, frame, iter_n=10, step=1.5, octave_n=4, octave_scale=1.4):
t_score = tf.reduce_mean(self.t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, self._t_input)[0] # behold the power of automatic differentiation!
# split the image into a number of octaves
img = frame
octaves = []
for i in range(octave_n-1):
hw = img.shape[:2]
lo = self._resize(img, np.int32(np.float32(hw)/octave_scale))
hi = img-self._resize(lo, hw)
img = lo
octaves.append(hi)
# generate details octave by octave
for octave in range(octave_n):
if octave>0:
hi = octaves[-octave]
img = self._resize(img, hi.shape[:2])+hi
for i in range(iter_n):
g = self._calc_grad_tiled(img, t_grad)
img += g*(step / (np.abs(g).mean()+1e-7))
#print('.',end = ' ')
return img
if __name__ == '__main__':
args = parser.parse_args()
if args.dream:
import tensorflow as tf
process_frame = DeepDream().process_frame
else:
process_frame = process_frame_simple
run(args.in_filename, args.out_filename, process_frame)
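A stdlib-only sketch of the raw-frame handling above: an rgb24 frame occupies exactly `width * height * 3` bytes, and "darkening" scales each byte by a constant. The real script uses numpy for speed; this only illustrates the byte layout and the `process_frame_simple` math.

```python
def frame_size(width, height):
    """Bytes per rawvideo frame in rgb24 (3 bytes per pixel)."""
    return width * height * 3

def darken(frame_bytes, factor=0.3):
    """Scale every byte, mirroring process_frame_simple without numpy."""
    return bytes(int(b * factor) for b in frame_bytes)

frame = bytes([200, 100, 50] * 4)  # a tiny 2x2 rgb24 "frame"
dark = darken(frame)
```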

examples/transcribe.py Executable file

@@ -0,0 +1,56 @@
#!/usr/bin/env python
from __future__ import unicode_literals, print_function
from google.cloud import speech
from google.cloud.speech import enums
from google.cloud.speech import types
import argparse
import ffmpeg
import logging
import sys
logging.basicConfig(level=logging.INFO, format='%(message)s')
logger = logging.getLogger(__file__)
logger.setLevel(logging.INFO)
parser = argparse.ArgumentParser(description='Convert speech audio to text using Google Speech API')
parser.add_argument('in_filename', help='Input filename (`-` for stdin)')
def decode_audio(in_filename, **input_kwargs):
try:
out, err = (ffmpeg
.input(in_filename, **input_kwargs)
.output('-', format='s16le', acodec='pcm_s16le', ac=1, ar='16k')
.overwrite_output()
.run(capture_stdout=True, capture_stderr=True)
)
except ffmpeg.Error as e:
print(e.stderr, file=sys.stderr)
sys.exit(1)
return out
def get_transcripts(audio_data):
client = speech.SpeechClient()
audio = types.RecognitionAudio(content=audio_data)
config = types.RecognitionConfig(
encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
sample_rate_hertz=16000,
language_code='en-US'
)
response = client.recognize(config, audio)
return [result.alternatives[0].transcript for result in response.results]
def transcribe(in_filename):
audio_data = decode_audio(in_filename)
transcripts = get_transcripts(audio_data)
for transcript in transcripts:
print(repr(transcript.encode('utf-8')))
if __name__ == '__main__':
args = parser.parse_args()
transcribe(args.in_filename)
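The `decode_audio()` call above asks ffmpeg for `s16le` (signed 16-bit little-endian), mono, 16 kHz audio, matching the `LINEAR16` / `sample_rate_hertz=16000` config passed to the Speech API. As a sanity check on what that format means in bytes (a quick derived calculation, not part of the script):

```python
def pcm_byte_count(seconds, sample_rate=16000, channels=1, bytes_per_sample=2):
    """Expected size of raw PCM audio: samples * channels * sample width."""
    return int(seconds * sample_rate * channels * bytes_per_sample)
```

One second of the decoded stream should therefore be 32,000 bytes; a mismatch usually means the ffmpeg flags and the recognizer config have drifted apart.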

examples/video_info.py Executable file

@@ -0,0 +1,31 @@
#!/usr/bin/env python
from __future__ import unicode_literals, print_function
import argparse
import ffmpeg
import sys
parser = argparse.ArgumentParser(description='Get video information')
parser.add_argument('in_filename', help='Input filename')
if __name__ == '__main__':
args = parser.parse_args()
try:
probe = ffmpeg.probe(args.in_filename)
except ffmpeg.Error as e:
print(e.stderr, file=sys.stderr)
sys.exit(1)
video_stream = next((stream for stream in probe['streams'] if stream['codec_type'] == 'video'), None)
if video_stream is None:
print('No video stream found', file=sys.stderr)
sys.exit(1)
width = int(video_stream['width'])
height = int(video_stream['height'])
num_frames = int(video_stream['nb_frames'])
print('width: {}'.format(width))
print('height: {}'.format(height))
print('num_frames: {}'.format(num_frames))
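The stream-selection step above can be tried against a hand-built stand-in for `ffmpeg.probe()` output (the dict below is illustrative, not real ffprobe output; note that ffprobe reports numeric fields as strings, hence the `int()` casts in the script):

```python
def find_video_stream(probe):
    """Return the first stream whose codec_type is 'video', else None."""
    return next(
        (s for s in probe['streams'] if s['codec_type'] == 'video'), None
    )

probe = {
    'streams': [
        {'codec_type': 'audio', 'channels': 2},
        {'codec_type': 'video', 'width': '1280', 'height': '720'},
    ]
}
video = find_video_stream(probe)
```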


@@ -1,6 +1,22 @@
from __future__ import unicode_literals
from . import _filters, _ffmpeg, _run
from ._filters import *
from . import nodes
from . import _ffmpeg
from . import _filters
from . import _probe
from . import _run
from . import _view
from .nodes import *
from ._ffmpeg import *
from ._filters import *
from ._probe import *
from ._run import *
__all__ = _filters.__all__ + _ffmpeg.__all__ + _run.__all__
from ._view import *
__all__ = (
nodes.__all__
+ _ffmpeg.__all__
+ _probe.__all__
+ _run.__all__
+ _view.__all__
+ _filters.__all__
)


@@ -1,58 +1,95 @@
from __future__ import unicode_literals
from past.builtins import basestring
from ._utils import basestring
from .nodes import (
FilterNode,
filter_operator,
GlobalNode,
InputNode,
operator,
MergeOutputsNode,
OutputNode,
output_operator,
)
def input(filename, **kwargs):
"""Input file URL (ffmpeg ``-i`` option)
Any supplied kwargs are passed to ffmpeg verbatim (e.g. ``t=20``,
``f='mp4'``, ``acodec='pcm'``, etc.).
To tell ffmpeg to read from stdin, use ``pipe:`` as the filename.
Official documentation: `Main options <https://ffmpeg.org/ffmpeg.html#Main-options>`__
"""
kwargs['filename'] = filename
fmt = kwargs.pop('f', None)
if fmt:
assert 'format' not in kwargs, "Can't specify both `format` and `f` kwargs"
if 'format' in kwargs:
raise ValueError("Can't specify both `format` and `f` kwargs")
kwargs['format'] = fmt
return InputNode(input.__name__, **kwargs)
return InputNode(input.__name__, kwargs=kwargs).stream()
@operator(node_classes={OutputNode, GlobalNode})
def overwrite_output(parent_node):
@output_operator()
def global_args(stream, *args):
"""Add extra global command-line argument(s), e.g. ``-progress``."""
return GlobalNode(stream, global_args.__name__, args).stream()
@output_operator()
def overwrite_output(stream):
"""Overwrite output files without asking (ffmpeg ``-y`` option)
Official documentation: `Main options <https://ffmpeg.org/ffmpeg.html#Main-options>`__
"""
return GlobalNode(parent_node, overwrite_output.__name__)
return GlobalNode(stream, overwrite_output.__name__, ['-y']).stream()
@operator(node_classes={OutputNode})
def merge_outputs(*parent_nodes):
return OutputNode(parent_nodes, merge_outputs.__name__)
@output_operator()
def merge_outputs(*streams):
"""Include all given outputs in one ffmpeg command line"""
return MergeOutputsNode(streams, merge_outputs.__name__).stream()
@operator(node_classes={InputNode, FilterNode})
def output(parent_node, filename, **kwargs):
@filter_operator()
def output(*streams_and_filename, **kwargs):
"""Output file URL
Syntax:
`ffmpeg.output(stream1[, stream2, stream3...], filename, **ffmpeg_args)`
Any supplied keyword arguments are passed to ffmpeg verbatim (e.g.
``t=20``, ``f='mp4'``, ``acodec='pcm'``, ``vcodec='rawvideo'``,
etc.). Some keyword-arguments are handled specially, as shown below.
Args:
video_bitrate: parameter for ``-b:v``, e.g. ``video_bitrate=1000``.
audio_bitrate: parameter for ``-b:a``, e.g. ``audio_bitrate=200``.
format: alias for ``-f`` parameter, e.g. ``format='mp4'``
(equivalent to ``f='mp4'``).
If multiple streams are provided, they are mapped to the same
output.
To tell ffmpeg to write to stdout, use ``pipe:`` as the filename.
Official documentation: `Synopsis <https://ffmpeg.org/ffmpeg.html#Synopsis>`__
"""
kwargs['filename'] = filename
streams_and_filename = list(streams_and_filename)
if 'filename' not in kwargs:
if not isinstance(streams_and_filename[-1], basestring):
raise ValueError('A filename must be provided')
kwargs['filename'] = streams_and_filename.pop(-1)
streams = streams_and_filename
fmt = kwargs.pop('f', None)
if fmt:
assert 'format' not in kwargs, "Can't specify both `format` and `f` kwargs"
if 'format' in kwargs:
raise ValueError("Can't specify both `format` and `f` kwargs")
kwargs['format'] = fmt
return OutputNode([parent_node], output.__name__, **kwargs)
return OutputNode(streams, output.__name__, kwargs=kwargs).stream()
__all__ = [
'input',
'merge_outputs',
'output',
'overwrite_output',
]
__all__ = ['input', 'merge_outputs', 'output', 'overwrite_output']
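A hypothetical stand-in (not the library's actual code) for the calling convention the new `output()` signature above implements: any number of streams followed by a trailing output filename, with the filename peeled off the end of the positional arguments.

```python
class Stream(object):
    """Hypothetical stand-in for ffmpeg-python's Stream objects."""
    def __init__(self, label):
        self.label = label

def split_streams_and_filename(*streams_and_filename):
    """Split trailing filename from leading streams, as output() does."""
    args = list(streams_and_filename)
    if not args or not isinstance(args[-1], str):
        raise ValueError('A filename must be provided')
    filename = args.pop(-1)
    return args, filename

streams, filename = split_streams_and_filename(
    Stream('0:v'), Stream('0:a'), 'out.mp4'
)
```

This is why `ffmpeg.output(video, audio, 'out.mp4')` maps both streams to one output: everything before the final string is treated as an input stream.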


@@ -1,109 +1,127 @@
from __future__ import unicode_literals
from .nodes import (
FilterNode,
operator,
)
from .nodes import FilterNode, filter_operator
from ._utils import escape_chars
@operator()
def filter_(parent_node, filter_name, *args, **kwargs):
"""Apply custom single-source filter.
@filter_operator()
def filter_multi_output(stream_spec, filter_name, *args, **kwargs):
"""Apply custom filter with one or more outputs.
``filter_`` is normally used by higher-level filter functions such as ``hflip``, but if a filter implementation
is missing from ``ffmpeg-python``, you can call ``filter_`` directly to have ``ffmpeg-python`` pass the filter name
and arguments to ffmpeg verbatim.
This is the same as ``filter`` except that the filter can produce more than one
output.
To reference an output stream, use either the ``.stream`` operator or bracket
shorthand:
Example:
```
split = ffmpeg.input('in.mp4').filter_multi_output('split')
split0 = split.stream(0)
split1 = split[1]
ffmpeg.concat(split0, split1).output('out.mp4').run()
```
"""
return FilterNode(
stream_spec, filter_name, args=args, kwargs=kwargs, max_inputs=None
)
@filter_operator()
def filter(stream_spec, filter_name, *args, **kwargs):
"""Apply custom filter.
``filter_`` is normally used by higher-level filter functions such as ``hflip``,
but if a filter implementation is missing from ``ffmpeg-python``, you can call
``filter_`` directly to have ``ffmpeg-python`` pass the filter name and arguments
to ffmpeg verbatim.
Args:
parent_node: Source stream to apply filter to.
stream_spec: a Stream, list of Streams, or label-to-Stream dictionary mapping
filter_name: ffmpeg filter name, e.g. `colorchannelmixer`
*args: list of args to pass to ffmpeg verbatim
**kwargs: list of keyword-args to pass to ffmpeg verbatim
This function is used internally by all of the other single-source filters (e.g. ``hflip``, ``crop``, etc.).
For custom multi-source filters, see ``filter_multi`` instead.
The function name is suffixed with ``_`` in order to avoid confusion with the standard python ``filter`` function.
The function name is suffixed with ``_`` in order to avoid confusion with the standard
python ``filter`` function.
Example:
``ffmpeg.input('in.mp4').filter_('hflip').output('out.mp4').run()``
``ffmpeg.input('in.mp4').filter('hflip').output('out.mp4').run()``
"""
return FilterNode([parent_node], filter_name, *args, **kwargs)
return filter_multi_output(stream_spec, filter_name, *args, **kwargs).stream()
def filter_multi(parent_nodes, filter_name, *args, **kwargs):
"""Apply custom multi-source filter.
This is nearly identical to the ``filter`` function except that it allows filters to be applied to multiple
streams. It's normally used by higher-level filter functions such as ``concat``, but if a filter implementation
is missing from ``ffmpeg-python``, you can call ``filter_multi`` directly.
Note that because it applies to multiple streams, it can't be used as an operator, unlike the ``filter`` function
(e.g. ``ffmpeg.input('in.mp4').filter_('hflip')``)
Args:
parent_nodes: List of source streams to apply filter to.
filter_name: ffmpeg filter name, e.g. `concat`
*args: list of args to pass to ffmpeg verbatim
**kwargs: list of keyword-args to pass to ffmpeg verbatim
For custom single-source filters, see ``filter_`` instead.
Example:
``ffmpeg.filter_multi(ffmpeg.input('in1.mp4'), ffmpeg.input('in2.mp4'), 'concat', n=2).output('out.mp4').run()``
@filter_operator()
def filter_(stream_spec, filter_name, *args, **kwargs):
"""Alternate name for ``filter``, so as to not collide with the
built-in python ``filter`` operator.
"""
return FilterNode(parent_nodes, filter_name, *args, **kwargs)
return filter(stream_spec, filter_name, *args, **kwargs)
@filter_operator()
def split(stream):
return FilterNode(stream, split.__name__)
@operator()
def setpts(parent_node, expr):
@filter_operator()
def asplit(stream):
return FilterNode(stream, asplit.__name__)
@filter_operator()
def setpts(stream, expr):
"""Change the PTS (presentation timestamp) of the input frames.
Args:
expr: The expression which is evaluated for each frame to construct its timestamp.
expr: The expression which is evaluated for each frame to construct its
timestamp.
Official documentation: `setpts, asetpts <https://ffmpeg.org/ffmpeg-filters.html#setpts_002c-asetpts>`__
"""
return filter_(parent_node, setpts.__name__, expr)
return FilterNode(stream, setpts.__name__, args=[expr]).stream()
@operator()
def trim(parent_node, **kwargs):
@filter_operator()
def trim(stream, **kwargs):
"""Trim the input so that the output contains one continuous subpart of the input.
Args:
start: Specify the time of the start of the kept section, i.e. the frame with the timestamp start will be the
first frame in the output.
end: Specify the time of the first frame that will be dropped, i.e. the frame immediately preceding the one
with the timestamp end will be the last frame in the output.
start_pts: This is the same as start, except this option sets the start timestamp in timebase units instead of
seconds.
end_pts: This is the same as end, except this option sets the end timestamp in timebase units instead of
seconds.
start: Specify the time of the start of the kept section, i.e. the frame with
the timestamp start will be the first frame in the output.
end: Specify the time of the first frame that will be dropped, i.e. the frame
immediately preceding the one with the timestamp end will be the last frame
in the output.
start_pts: This is the same as start, except this option sets the start
timestamp in timebase units instead of seconds.
end_pts: This is the same as end, except this option sets the end timestamp in
timebase units instead of seconds.
duration: The maximum duration of the output in seconds.
start_frame: The number of the first frame that should be passed to the output.
end_frame: The number of the first frame that should be dropped.
Official documentation: `trim <https://ffmpeg.org/ffmpeg-filters.html#trim>`__
"""
return filter_(parent_node, trim.__name__, **kwargs)
return FilterNode(stream, trim.__name__, kwargs=kwargs).stream()
@operator()
@filter_operator()
def overlay(main_parent_node, overlay_parent_node, eof_action='repeat', **kwargs):
"""Overlay one video on top of another.
Args:
x: Set the expression for the x coordinates of the overlaid video on the main video. Default value is 0. In
case the expression is invalid, it is set to a huge value (meaning that the overlay will not be displayed
within the output visible area).
y: Set the expression for the y coordinates of the overlaid video on the main video. Default value is 0. In
case the expression is invalid, it is set to a huge value (meaning that the overlay will not be displayed
within the output visible area).
eof_action: The action to take when EOF is encountered on the secondary input; it accepts one of the following
values:
x: Set the expression for the x coordinates of the overlaid video on the main
video. Default value is 0. In case the expression is invalid, it is set to
a huge value (meaning that the overlay will not be displayed within the
output visible area).
y: Set the expression for the y coordinates of the overlaid video on the main
video. Default value is 0. In case the expression is invalid, it is set to
a huge value (meaning that the overlay will not be displayed within the
output visible area).
eof_action: The action to take when EOF is encountered on the secondary input;
it accepts one of the following values:
* ``repeat``: Repeat the last frame (the default).
* ``endall``: End both streams.
@@ -112,12 +130,13 @@ def overlay(main_parent_node, overlay_parent_node, eof_action='repeat', **kwargs):
eval: Set when the expressions for x, and y are evaluated.
It accepts the following values:
* ``init``: only evaluate expressions once during the filter initialization or when a command is
processed
* ``init``: only evaluate expressions once during the filter initialization
or when a command is processed
* ``frame``: evaluate expressions for each incoming frame
Default value is ``frame``.
shortest: If set to 1, force the output to terminate when the shortest input terminates. Default value is 0.
shortest: If set to 1, force the output to terminate when the shortest input
terminates. Default value is 0.
format: Set the format for the output video.
It accepts the following values:
@@ -128,48 +147,80 @@ def overlay(main_parent_node, overlay_parent_node, eof_action='repeat', **kwargs):
* ``gbrp``: force planar RGB output
Default value is ``yuv420``.
rgb (deprecated): If set to 1, force the filter to accept inputs in the RGB color space. Default value is 0.
This option is deprecated, use format instead.
repeatlast: If set to 1, force the filter to draw the last overlay frame over the main input until the end of
the stream. A value of 0 disables this behavior. Default value is 1.
rgb (deprecated): If set to 1, force the filter to accept inputs in the RGB
color space. Default value is 0. This option is deprecated, use format
instead.
repeatlast: If set to 1, force the filter to draw the last overlay frame over
the main input until the end of the stream. A value of 0 disables this
behavior. Default value is 1.
Official documentation: `overlay <https://ffmpeg.org/ffmpeg-filters.html#overlay-1>`__
"""
kwargs['eof_action'] = eof_action
return filter_multi([main_parent_node, overlay_parent_node], overlay.__name__, **kwargs)
return FilterNode(
[main_parent_node, overlay_parent_node],
overlay.__name__,
kwargs=kwargs,
max_inputs=2,
).stream()
@operator()
def hflip(parent_node):
@filter_operator()
def hflip(stream):
"""Flip the input video horizontally.
Official documentation: `hflip <https://ffmpeg.org/ffmpeg-filters.html#hflip>`__
"""
return filter_(parent_node, hflip.__name__)
return FilterNode(stream, hflip.__name__).stream()
@operator()
def vflip(parent_node):
@filter_operator()
def vflip(stream):
"""Flip the input video vertically.
Official documentation: `vflip <https://ffmpeg.org/ffmpeg-filters.html#vflip>`__
"""
return filter_(parent_node, vflip.__name__)
return FilterNode(stream, vflip.__name__).stream()
@operator()
def drawbox(parent_node, x, y, width, height, color, thickness=None, **kwargs):
@filter_operator()
def crop(stream, x, y, width, height, **kwargs):
"""Crop the input video.
Args:
x: The horizontal position, in the input video, of the left edge of
the output video.
y: The vertical position, in the input video, of the top edge of the
output video.
width: The width of the output video. Must be greater than 0.
height: The height of the output video. Must be greater than 0.
Official documentation: `crop <https://ffmpeg.org/ffmpeg-filters.html#crop>`__
"""
return FilterNode(
stream, crop.__name__, args=[width, height, x, y], kwargs=kwargs
).stream()
@filter_operator()
def drawbox(stream, x, y, width, height, color, thickness=None, **kwargs):
"""Draw a colored box on the input image.
Args:
x: The expression which specifies the top left corner x coordinate of the box.
It defaults to 0.
y: The expression which specifies the top left corner y coordinate of the box.
It defaults to 0.
width: Specify the width of the box; if 0 interpreted as the input width. It
defaults to 0.
height: Specify the height of the box; if 0 interpreted as the input height. It
defaults to 0.
color: Specify the color of the box to write. For the general syntax of this
option, check the "Color" section in the ffmpeg-utils manual. If the
special value invert is used, the box edge color is the same as the video
with inverted luma.
thickness: The expression which sets the thickness of the box edge. Default
value is 3.
w: Alias for ``width``.
h: Alias for ``height``.
c: Alias for ``color``.
@ -179,88 +230,276 @@ def drawbox(parent_node, x, y, width, height, color, thickness=None, **kwargs):
"""
if thickness:
kwargs['t'] = thickness
return FilterNode(
stream, drawbox.__name__, args=[x, y, width, height, color], kwargs=kwargs
).stream()
@filter_operator()
def drawtext(stream, text=None, x=0, y=0, escape_text=True, **kwargs):
"""Draw a text string or text from a specified file on top of a video, using the
libfreetype library.
To enable compilation of this filter, you need to configure FFmpeg with
``--enable-libfreetype``. To enable default font fallback and the font option you
need to configure FFmpeg with ``--enable-libfontconfig``. To enable the
text_shaping option, you need to configure FFmpeg with ``--enable-libfribidi``.
Args:
box: Used to draw a box around text using the background color. The value must
be either 1 (enable) or 0 (disable). The default value of box is 0.
boxborderw: Set the width of the border to be drawn around the box using
boxcolor. The default value of boxborderw is 0.
boxcolor: The color to be used for drawing box around text. For the syntax of
this option, check the "Color" section in the ffmpeg-utils manual. The
default value of boxcolor is "white".
line_spacing: Set the line spacing in pixels of the border to be drawn around
the box using box. The default value of line_spacing is 0.
borderw: Set the width of the border to be drawn around the text using
bordercolor. The default value of borderw is 0.
bordercolor: Set the color to be used for drawing border around text. For the
syntax of this option, check the "Color" section in the ffmpeg-utils
manual. The default value of bordercolor is "black".
expansion: Select how the text is expanded. Can be either none, strftime
(deprecated) or normal (default). See the Text expansion section below for
details.
basetime: Set a start time for the count. Value is in microseconds. Only
applied in the deprecated strftime expansion mode. To emulate in normal
expansion mode use the pts function, supplying the start time (in seconds)
as the second argument.
fix_bounds: If true, check and fix text coords to avoid clipping.
fontcolor: The color to be used for drawing fonts. For the syntax of this
option, check the "Color" section in the ffmpeg-utils manual. The default
value of fontcolor is "black".
fontcolor_expr: String which is expanded the same way as text to obtain dynamic
fontcolor value. By default this option has empty value and is not
processed. When this option is set, it overrides fontcolor option.
font: The font family to be used for drawing text. By default Sans.
fontfile: The font file to be used for drawing text. The path must be included.
This parameter is mandatory if the fontconfig support is disabled.
alpha: Draw the text applying alpha blending. The value can be a number between
0.0 and 1.0. The expression accepts the same variables x, y as well. The
default value is 1. Please see fontcolor_expr.
fontsize: The font size to be used for drawing text. The default value of
fontsize is 16.
text_shaping: If set to 1, attempt to shape the text (for example, reverse the
order of right-to-left text and join Arabic characters) before drawing it.
Otherwise, just draw the text exactly as given. By default 1 (if supported).
ft_load_flags: The flags to be used for loading the fonts. The flags map the
corresponding flags supported by libfreetype, and are a combination of the
following values:
* ``default``
* ``no_scale``
* ``no_hinting``
* ``render``
* ``no_bitmap``
* ``vertical_layout``
* ``force_autohint``
* ``crop_bitmap``
* ``pedantic``
* ``ignore_global_advance_width``
* ``no_recurse``
* ``ignore_transform``
* ``monochrome``
* ``linear_design``
* ``no_autohint``
Default value is "default". For more information consult the documentation
for the FT_LOAD_* libfreetype flags.
shadowcolor: The color to be used for drawing a shadow behind the drawn text.
For the syntax of this option, check the "Color" section in the ffmpeg-utils
manual. The default value of shadowcolor is "black".
shadowx: The x offset for the text shadow position with respect to the position
of the text. It can be either positive or negative values. The default value
is "0".
shadowy: The y offset for the text shadow position with respect to the position
of the text. It can be either positive or negative values. The default value
is "0".
start_number: The starting frame number for the n/frame_num variable. The
default value is "0".
tabsize: The size in number of spaces to use for rendering the tab. Default
value is 4.
timecode: Set the initial timecode representation in "hh:mm:ss[:;.]ff" format.
It can be used with or without text parameter. timecode_rate option must be
specified.
rate: Set the timecode frame rate (timecode only).
timecode_rate: Alias for ``rate``.
r: Alias for ``rate``.
tc24hmax: If set to 1, the output of the timecode option will wrap around at 24
hours. Default is 0 (disabled).
text: The text string to be drawn. The text must be a sequence of UTF-8 encoded
characters. This parameter is mandatory if no file is specified with the
parameter textfile.
textfile: A text file containing text to be drawn. The text must be a sequence
of UTF-8 encoded characters. This parameter is mandatory if no text string
is specified with the parameter text. If both text and textfile are
specified, an error is thrown.
reload: If set to 1, the textfile will be reloaded before each frame. Be sure
to update it atomically, or it may be read partially, or even fail.
x: The expression which specifies the offset where text will be drawn within
the video frame. It is relative to the left border of the output image. The
default value is "0".
y: The expression which specifies the offset where text will be drawn within
the video frame. It is relative to the top border of the output image. The
default value is "0". See below for the list of accepted constants and
functions.
Expression constants:
The parameters for x and y are expressions containing the following constants
and functions:
- dar: input display aspect ratio, it is the same as ``(w / h) * sar``
- hsub: horizontal chroma subsample values. For example for the pixel format
"yuv422p" hsub is 2 and vsub is 1.
- vsub: vertical chroma subsample values. For example for the pixel format
"yuv422p" hsub is 2 and vsub is 1.
- line_h: the height of each text line
- lh: Alias for ``line_h``.
- main_h: the input height
- h: Alias for ``main_h``.
- H: Alias for ``main_h``.
- main_w: the input width
- w: Alias for ``main_w``.
- W: Alias for ``main_w``.
- ascent: the maximum distance from the baseline to the highest/upper grid
coordinate used to place a glyph outline point, for all the rendered glyphs.
It is a positive value, due to the grid's orientation with the Y axis
upwards.
- max_glyph_a: Alias for ``ascent``.
- descent: the maximum distance from the baseline to the lowest grid
coordinate used to place a glyph outline
point, for all the rendered glyphs. This is a negative value, due to the
grid's orientation, with the Y axis upwards.
- max_glyph_d: Alias for ``descent``.
- max_glyph_h: maximum glyph height, that is the maximum height for all the
glyphs contained in the rendered text, it is equivalent to ascent - descent.
- max_glyph_w: maximum glyph width, that is the maximum width for all the
glyphs contained in the rendered text.
- n: the number of input frame, starting from 0
- rand(min, max): return a random number included between min and max
- sar: The input sample aspect ratio.
- t: timestamp expressed in seconds, NAN if the input timestamp is unknown
- text_h: the height of the rendered text
- th: Alias for ``text_h``.
- text_w: the width of the rendered text
- tw: Alias for ``text_w``.
- x: the x offset coordinates where the text is drawn.
- y: the y offset coordinates where the text is drawn.
These parameters allow the x and y expressions to refer each other, so you can
for example specify ``y=x/dar``.
Official documentation: `drawtext <https://ffmpeg.org/ffmpeg-filters.html#drawtext>`__
"""
if text is not None:
if escape_text:
text = escape_chars(text, '\\\'%')
kwargs['text'] = text
if x != 0:
kwargs['x'] = x
if y != 0:
kwargs['y'] = y
return filter(stream, drawtext.__name__, **kwargs)
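When `escape_text` is enabled, drawtext runs the text through `escape_chars` with backslash, quote, and percent as the target characters. A self-contained mirror of that helper (the real one lives in `ffmpeg/_utils.py`) shows why the backslash must be escaped first:

```python
def escape_chars(text, chars):
    # Mirror of escape_chars in ffmpeg/_utils.py: backslash is moved to the
    # front so the escapes added for later characters are not re-escaped.
    text = str(text)
    chars = list(set(chars))
    if '\\' in chars:
        chars.remove('\\')
        chars.insert(0, '\\')
    for ch in chars:
        text = text.replace(ch, '\\' + ch)
    return text

print(escape_chars("it's 100%", "\\'%"))  # it\'s 100\%
```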
@filter_operator()
def concat(*streams, **kwargs):
"""Concatenate audio and video streams, joining them together one after the other.
The filter works on segments of synchronized video and audio streams. All segments
must have the same number of streams of each type, and that will also be the number
of streams at output.
Args:
unsafe: Activate unsafe mode: do not fail if segments have a different format.
Related streams do not always have exactly the same duration, for various reasons
including codec frame size or sloppy authoring. For that reason, related
synchronized streams (e.g. a video and its audio track) should be concatenated at
once. The concat filter will use the duration of the longest stream in each segment
(except the last one), and if necessary pad shorter audio streams with silence.
For this filter to work correctly, all segments must start at timestamp 0.
All corresponding streams must have the same parameters in all segments; the
filtering system will automatically select a common pixel format for video streams,
and a common sample format, sample rate and channel layout for audio streams, but
other settings, such as resolution, must be converted explicitly by the user.
Different frame rates are acceptable but will result in variable frame rate at
output; be sure to configure the output file to handle it.
Official documentation: `concat <https://ffmpeg.org/ffmpeg-filters.html#concat>`__
"""
video_stream_count = kwargs.get('v', 1)
audio_stream_count = kwargs.get('a', 0)
stream_count = video_stream_count + audio_stream_count
if len(streams) % stream_count != 0:
raise ValueError(
'Expected concat input streams to have length multiple of {} (v={}, a={}); got {}'.format(
stream_count, video_stream_count, audio_stream_count, len(streams)
)
)
kwargs['n'] = int(len(streams) / stream_count)
return FilterNode(streams, concat.__name__, kwargs=kwargs, max_inputs=None).stream()
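The validation above derives `n` (the number of segments) from the flat list of input streams: each segment contributes `v` video plus `a` audio streams, so the total must divide evenly. A standalone sketch of just that arithmetic:

```python
def concat_segment_count(num_streams, v=1, a=0):
    # Each segment contributes v video + a audio streams, so the flat list
    # of inputs must be a multiple of (v + a); n is the quotient.
    per_segment = v + a
    if num_streams % per_segment != 0:
        raise ValueError('stream count must be a multiple of v + a')
    return num_streams // per_segment

print(concat_segment_count(6, v=1, a=1))  # 3
```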
@filter_operator()
def zoompan(stream, **kwargs):
"""Apply Zoom & Pan effect.
Args:
zoom: Set the zoom expression. Default is 1.
x: Set the x expression. Default is 0.
y: Set the y expression. Default is 0.
d: Set the duration expression, in number of frames. This sets how many
frames the effect will last for a single input image.
s: Set the output image size, default is ``hd720``.
fps: Set the output frame rate, default is 25.
z: Alias for ``zoom``.
Official documentation: `zoompan <https://ffmpeg.org/ffmpeg-filters.html#zoompan>`__
"""
return FilterNode(stream, zoompan.__name__, kwargs=kwargs).stream()
@filter_operator()
def hue(stream, **kwargs):
"""Modify the hue and/or the saturation of the input.
Args:
h: Specify the hue angle as a number of degrees. It accepts an expression, and
defaults to "0".
s: Specify the saturation in the [-10,10] range. It accepts an expression and
defaults to "1".
H: Specify the hue angle as a number of radians. It accepts an expression, and
defaults to "0".
b: Specify the brightness in the [-10,10] range. It accepts an expression and
defaults to "0".
Official documentation: `hue <https://ffmpeg.org/ffmpeg-filters.html#hue>`__
"""
return FilterNode(stream, hue.__name__, kwargs=kwargs).stream()
@filter_operator()
def colorchannelmixer(stream, *args, **kwargs):
"""Adjust video input frames by re-mixing color channels.
Official documentation: `colorchannelmixer <https://ffmpeg.org/ffmpeg-filters.html#colorchannelmixer>`__
"""
return FilterNode(stream, colorchannelmixer.__name__, kwargs=kwargs).stream()
__all__ = [
'colorchannelmixer',
'concat',
'crop',
'drawbox',
'drawtext',
'filter',
'filter_',
'filter_multi',
'filter_multi_output',
'hflip',
'hue',
'overlay',
'vflip',
'zoompan',
]

ffmpeg/_probe.py

@ -0,0 +1,30 @@
import json
import subprocess
from ._run import Error
from ._utils import convert_kwargs_to_cmd_line_args
def probe(filename, cmd='ffprobe', timeout=None, **kwargs):
"""Run ffprobe on the specified file and return a JSON representation of the output.
Raises:
:class:`ffmpeg.Error`: if ffprobe returns a non-zero exit code,
an :class:`Error` is returned with a generic error message.
The stderr output can be retrieved by accessing the
``stderr`` property of the exception.
"""
args = [cmd, '-show_format', '-show_streams', '-of', 'json']
args += convert_kwargs_to_cmd_line_args(kwargs)
args += [filename]
p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
communicate_kwargs = {}
if timeout is not None:
communicate_kwargs['timeout'] = timeout
out, err = p.communicate(**communicate_kwargs)
if p.returncode != 0:
raise Error('ffprobe', out, err)
return json.loads(out.decode('utf-8'))
__all__ = ['probe']
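`probe` returns the `json.loads` of ffprobe's output: a dict with `streams` and `format` keys. A typical way to pull out the video dimensions (the sample dict below is illustrative, not real ffprobe output):

```python
import json

# Illustrative sample of the shape probe() returns.
sample = json.loads(
    '{"streams": [{"codec_type": "audio"},'
    ' {"codec_type": "video", "width": 640, "height": 360}],'
    ' "format": {"duration": "12.5"}}'
)
video = next(s for s in sample['streams'] if s['codec_type'] == 'video')
print(video['width'], video['height'], float(sample['format']['duration']))
```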


@ -1,41 +1,40 @@
from __future__ import unicode_literals
from .dag import get_outgoing_edges, topo_sort
from ._utils import basestring, convert_kwargs_to_cmd_line_args
from builtins import str
from functools import reduce
import copy
import operator
import subprocess
from ._ffmpeg import input, output
from .nodes import (
get_stream_spec_nodes,
FilterNode,
GlobalNode,
InputNode,
OutputNode,
output_operator,
)
try:
from collections.abc import Iterable
except ImportError:
from collections import Iterable
class Error(Exception):
def __init__(self, cmd, stdout, stderr):
super(Error, self).__init__(
'{} error (see stderr output for detail)'.format(cmd)
)
self.stdout = stdout
self.stderr = stderr
def _get_input_args(input_node):
if input_node.name == input.__name__:
kwargs = copy.copy(input_node.kwargs)
filename = kwargs.pop('filename')
fmt = kwargs.pop('format', None)
video_size = kwargs.pop('video_size', None)
@ -44,107 +43,305 @@ def _get_input_args(input_node):
args += ['-f', fmt]
if video_size:
args += ['-video_size', '{}x{}'.format(video_size[0], video_size[1])]
args += convert_kwargs_to_cmd_line_args(kwargs)
args += ['-i', filename]
else:
raise ValueError('Unsupported input node: {}'.format(input_node))
return args
def _format_input_stream_name(stream_name_map, edge, is_final_arg=False):
prefix = stream_name_map[edge.upstream_node, edge.upstream_label]
if not edge.upstream_selector:
suffix = ''
else:
suffix = ':{}'.format(edge.upstream_selector)
if is_final_arg and isinstance(edge.upstream_node, InputNode):
## Special case: `-map` args should not have brackets for input
## nodes.
fmt = '{}{}'
else:
fmt = '[{}{}]'
return fmt.format(prefix, suffix)
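The bracketing rule above is small but easy to get wrong: references passed to `-map` for plain inputs are written bare (`0:v`), while everything else is bracketed (`[s0:v]`). A self-contained sketch of the same logic:

```python
def format_input_stream_name(prefix, selector=None, map_arg_for_input=False):
    # Plain input references used with -map are written bare ("0:v");
    # filter-graph labels and everything else are bracketed ("[s0:v]").
    suffix = '' if selector is None else ':{}'.format(selector)
    fmt = '{}{}' if map_arg_for_input else '[{}{}]'
    return fmt.format(prefix, suffix)

print(format_input_stream_name('0', 'v', map_arg_for_input=True))  # 0:v
print(format_input_stream_name('s0'))  # [s0]
```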
def _format_output_stream_name(stream_name_map, edge):
return '[{}]'.format(stream_name_map[edge.upstream_node, edge.upstream_label])
def _get_filter_spec(node, outgoing_edge_map, stream_name_map):
incoming_edges = node.incoming_edges
outgoing_edges = get_outgoing_edges(node, outgoing_edge_map)
inputs = [
_format_input_stream_name(stream_name_map, edge) for edge in incoming_edges
]
outputs = [
_format_output_stream_name(stream_name_map, edge) for edge in outgoing_edges
]
filter_spec = '{}{}{}'.format(
''.join(inputs), node._get_filter(outgoing_edges), ''.join(outputs)
)
return filter_spec
def _allocate_filter_stream_names(filter_nodes, outgoing_edge_maps, stream_name_map):
stream_count = 0
for upstream_node in filter_nodes:
outgoing_edge_map = outgoing_edge_maps[upstream_node]
for upstream_label, downstreams in sorted(outgoing_edge_map.items()):
if len(downstreams) > 1:
# TODO: automatically insert `splits` ahead of time via graph transformation.
raise ValueError(
'Encountered {} with multiple outgoing edges with same upstream '
'label {!r}; a `split` filter is probably required'.format(
upstream_node, upstream_label
)
)
stream_name_map[upstream_node, upstream_label] = 's{}'.format(stream_count)
stream_count += 1
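Stream names are handed out sequentially (`s0`, `s1`, ...) per `(node, upstream_label)` pair, in a deterministic order. A reduced sketch of that allocation, assuming the pairs are already deduplicated and ordered:

```python
def allocate_stream_names(node_label_pairs):
    # Sequential s0, s1, ... naming per (node, upstream_label) pair, as in
    # _allocate_filter_stream_names; pairs are assumed deduplicated/ordered.
    return {pair: 's{}'.format(i) for i, pair in enumerate(node_label_pairs)}

names = allocate_stream_names([('scale', None), ('overlay', None)])
print(names[('overlay', None)])  # s1
```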
def _get_filter_arg(filter_nodes, outgoing_edge_maps, stream_name_map):
_allocate_filter_stream_names(filter_nodes, outgoing_edge_maps, stream_name_map)
filter_specs = [
_get_filter_spec(node, outgoing_edge_maps[node], stream_name_map)
for node in filter_nodes
]
return ';'.join(filter_specs)
def _get_global_args(node):
return list(node.args)
def _get_output_args(node, stream_name_map):
if node.name != output.__name__:
raise ValueError('Unsupported output node: {}'.format(node))
args = []
if len(node.incoming_edges) == 0:
raise ValueError('Output node {} has no mapped streams'.format(node))
for edge in node.incoming_edges:
# edge = node.incoming_edges[0]
stream_name = _format_input_stream_name(
stream_name_map, edge, is_final_arg=True
)
if stream_name != '0' or len(node.incoming_edges) > 1:
args += ['-map', stream_name]
kwargs = copy.copy(node.kwargs)
filename = kwargs.pop('filename')
if 'format' in kwargs:
args += ['-f', kwargs.pop('format')]
if 'video_bitrate' in kwargs:
args += ['-b:v', str(kwargs.pop('video_bitrate'))]
if 'audio_bitrate' in kwargs:
args += ['-b:a', str(kwargs.pop('audio_bitrate'))]
if 'video_size' in kwargs:
video_size = kwargs.pop('video_size')
if not isinstance(video_size, basestring) and isinstance(video_size, Iterable):
video_size = '{}x{}'.format(video_size[0], video_size[1])
args += ['-video_size', video_size]
args += convert_kwargs_to_cmd_line_args(kwargs)
args += [filename]
return args
@output_operator()
def get_args(stream_spec, overwrite_output=False):
"""Build command-line arguments to be passed to ffmpeg."""
nodes = get_stream_spec_nodes(stream_spec)
args = []
# TODO: group nodes together, e.g. `-i somefile -r somerate`.
sorted_nodes, outgoing_edge_maps = topo_sort(nodes)
input_nodes = [node for node in sorted_nodes if isinstance(node, InputNode)]
output_nodes = [node for node in sorted_nodes if isinstance(node, OutputNode)]
global_nodes = [node for node in sorted_nodes if isinstance(node, GlobalNode)]
filter_nodes = [node for node in sorted_nodes if isinstance(node, FilterNode)]
stream_name_map = {(node, None): str(i) for i, node in enumerate(input_nodes)}
filter_arg = _get_filter_arg(filter_nodes, outgoing_edge_maps, stream_name_map)
args += reduce(operator.add, [_get_input_args(node) for node in input_nodes])
if filter_arg:
args += ['-filter_complex', filter_arg]
args += reduce(
operator.add, [_get_output_args(node, stream_name_map) for node in output_nodes]
)
args += reduce(operator.add, [_get_global_args(node) for node in global_nodes], [])
if overwrite_output:
args += ['-y']
return args
@output_operator()
def compile(stream_spec, cmd='ffmpeg', overwrite_output=False):
"""Build command-line for invoking ffmpeg.
The :meth:`run` function uses this to build the command line
arguments and should work in most cases, but calling this function
directly is useful for debugging or if you need to invoke ffmpeg
manually for whatever reason.
This is the same as calling :meth:`get_args` except that it also
includes the ``ffmpeg`` command as the first argument.
"""
if isinstance(cmd, basestring):
cmd = [cmd]
elif type(cmd) != list:
cmd = list(cmd)
return cmd + get_args(stream_spec, overwrite_output=overwrite_output)
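The overall argument order `get_args` produces is: input args, then a single `-filter_complex` (if any filters exist), then per-output args, then `-y` when overwriting is requested. A simplified sketch of that assembly (hypothetical helper, not the library's API):

```python
def assemble_args(input_args, filter_arg, output_args, overwrite_output=False):
    # Argument order mirrors get_args: inputs, one -filter_complex, outputs,
    # then -y when overwriting is requested.
    args = list(input_args)
    if filter_arg:
        args += ['-filter_complex', filter_arg]
    args += list(output_args)
    if overwrite_output:
        args += ['-y']
    return args

print(assemble_args(['-i', 'in.mp4'], '[0]hflip[s0]', ['-map', '[s0]', 'out.mp4'], True))
```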
@output_operator()
def run_async(
stream_spec,
cmd='ffmpeg',
pipe_stdin=False,
pipe_stdout=False,
pipe_stderr=False,
quiet=False,
overwrite_output=False,
cwd=None,
):
"""Asynchronously invoke ffmpeg for the supplied node graph.
Args:
pipe_stdin: if True, connect pipe to subprocess stdin (to be
used with ``pipe:`` ffmpeg inputs).
pipe_stdout: if True, connect pipe to subprocess stdout (to be
used with ``pipe:`` ffmpeg outputs).
pipe_stderr: if True, connect pipe to subprocess stderr.
quiet: shorthand for setting ``capture_stdout`` and
``capture_stderr``.
**kwargs: keyword-arguments passed to ``get_args()`` (e.g.
``overwrite_output=True``).
Returns:
A `subprocess Popen`_ object representing the child process.
Examples:
Run and stream input::
process = (
ffmpeg
.input('pipe:', format='rawvideo', pix_fmt='rgb24', s='{}x{}'.format(width, height))
.output(out_filename, pix_fmt='yuv420p')
.overwrite_output()
.run_async(pipe_stdin=True)
)
process.communicate(input=input_data)
Run and capture output::
process = (
ffmpeg
.input(in_filename)
.output('pipe:', format='rawvideo', pix_fmt='rgb24')
.run_async(pipe_stdout=True, pipe_stderr=True)
)
out, err = process.communicate()
Process video frame-by-frame using numpy::
process1 = (
ffmpeg
.input(in_filename)
.output('pipe:', format='rawvideo', pix_fmt='rgb24')
.run_async(pipe_stdout=True)
)
process2 = (
ffmpeg
.input('pipe:', format='rawvideo', pix_fmt='rgb24', s='{}x{}'.format(width, height))
.output(out_filename, pix_fmt='yuv420p')
.overwrite_output()
.run_async(pipe_stdin=True)
)
while True:
in_bytes = process1.stdout.read(width * height * 3)
if not in_bytes:
break
in_frame = (
np
.frombuffer(in_bytes, np.uint8)
.reshape([height, width, 3])
)
out_frame = in_frame * 0.3
process2.stdin.write(
out_frame
.astype(np.uint8)
.tobytes()
)
process2.stdin.close()
process1.wait()
process2.wait()
.. _subprocess Popen: https://docs.python.org/3/library/subprocess.html#popen-objects
"""
args = compile(stream_spec, cmd, overwrite_output=overwrite_output)
stdin_stream = subprocess.PIPE if pipe_stdin else None
stdout_stream = subprocess.PIPE if pipe_stdout else None
stderr_stream = subprocess.PIPE if pipe_stderr else None
if quiet:
stderr_stream = subprocess.STDOUT
stdout_stream = subprocess.DEVNULL
return subprocess.Popen(
args,
stdin=stdin_stream,
stdout=stdout_stream,
stderr=stderr_stream,
cwd=cwd,
)
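The frame-by-frame docstring example reads `width * height * 3` bytes per iteration. That constant comes from the pixel format: rawvideo with `pix_fmt='rgb24'` packs exactly 3 bytes per pixel, so one full frame is `w * h * 3` bytes:

```python
def rgb24_frame_bytes(width, height):
    # rawvideo/rgb24 is 3 bytes per pixel, so a full frame is w * h * 3
    # bytes -- the read size used in the frame-by-frame example.
    return width * height * 3

print(rgb24_frame_bytes(640, 360))  # 691200
```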
@output_operator()
def run(
stream_spec,
cmd='ffmpeg',
capture_stdout=False,
capture_stderr=False,
input=None,
quiet=False,
overwrite_output=False,
cwd=None,
):
"""Invoke ffmpeg for the supplied node graph.
Args:
capture_stdout: if True, capture stdout (to be used with
``pipe:`` ffmpeg outputs).
capture_stderr: if True, capture stderr.
quiet: shorthand for setting ``capture_stdout`` and ``capture_stderr``.
input: text to be sent to stdin (to be used with ``pipe:``
ffmpeg inputs)
**kwargs: keyword-arguments passed to ``get_args()`` (e.g.
``overwrite_output=True``).
Returns: (out, err) tuple containing captured stdout and stderr data.
"""
process = run_async(
stream_spec,
cmd,
pipe_stdin=input is not None,
pipe_stdout=capture_stdout,
pipe_stderr=capture_stderr,
quiet=quiet,
overwrite_output=overwrite_output,
cwd=cwd,
)
out, err = process.communicate(input)
retcode = process.poll()
if retcode:
raise Error('ffmpeg', out, err)
return out, err
__all__ = [
'compile',
'Error',
'get_args',
'run',
'run_async',
]

ffmpeg/_utils.py

@ -0,0 +1,108 @@
from __future__ import unicode_literals
from builtins import str
from past.builtins import basestring
import hashlib
import sys
if sys.version_info.major == 2:
# noinspection PyUnresolvedReferences,PyShadowingBuiltins
str = str
try:
from collections.abc import Iterable
except ImportError:
from collections import Iterable
# `past.builtins.basestring` module can't be imported on Python3 in some environments (Ubuntu).
# This code is copy-pasted from it to avoid crashes.
class BaseBaseString(type):
def __instancecheck__(cls, instance):
return isinstance(instance, (bytes, str))
def __subclasshook__(cls, thing):
# TODO: What should go here?
raise NotImplementedError
def with_metaclass(meta, *bases):
class metaclass(meta):
__call__ = type.__call__
__init__ = type.__init__
def __new__(cls, name, this_bases, d):
if this_bases is None:
return type.__new__(cls, name, (), d)
return meta(name, bases, d)
return metaclass('temporary_class', None, {})
if sys.version_info.major >= 3:
class basestring(with_metaclass(BaseBaseString)):
pass
else:
# noinspection PyUnresolvedReferences,PyCompatibility
from builtins import basestring
def _recursive_repr(item):
"""Hack around python `repr` to deterministically represent dictionaries.
This is able to represent more things than json.dumps, since it does not require
things to be JSON serializable (e.g. datetimes).
"""
if isinstance(item, basestring):
result = str(item)
elif isinstance(item, list):
result = '[{}]'.format(', '.join([_recursive_repr(x) for x in item]))
elif isinstance(item, dict):
kv_pairs = [
'{}: {}'.format(_recursive_repr(k), _recursive_repr(item[k]))
for k in sorted(item)
]
result = '{' + ', '.join(kv_pairs) + '}'
else:
result = repr(item)
return result
def get_hash(item):
repr_ = _recursive_repr(item).encode('utf-8')
return hashlib.md5(repr_).hexdigest()
def get_hash_int(item):
return int(get_hash(item), base=16)
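The point of `_recursive_repr` is determinism: by sorting dict keys, two equal dicts always produce the same representation, so `get_hash` is stable across insertion orders. A simplified sketch (not the full helper, which also special-cases non-JSON-serializable values):

```python
import hashlib

def recursive_repr(item):
    # Simplified sketch of _recursive_repr: sorted dict keys make the
    # representation (and therefore the md5 hash) order-independent.
    if isinstance(item, dict):
        parts = ['{}: {}'.format(recursive_repr(k), recursive_repr(item[k]))
                 for k in sorted(item)]
        return '{' + ', '.join(parts) + '}'
    if isinstance(item, list):
        return '[' + ', '.join(recursive_repr(x) for x in item) + ']'
    return str(item) if isinstance(item, str) else repr(item)

a = recursive_repr({'b': 1, 'a': [2, 3]})
b = recursive_repr({'a': [2, 3], 'b': 1})
print(hashlib.md5(a.encode('utf-8')).hexdigest() == hashlib.md5(b.encode('utf-8')).hexdigest())
```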
def escape_chars(text, chars):
"""Helper function to escape uncomfortable characters."""
text = str(text)
chars = list(set(chars))
if '\\' in chars:
chars.remove('\\')
chars.insert(0, '\\')
for ch in chars:
text = text.replace(ch, '\\' + ch)
return text
def convert_kwargs_to_cmd_line_args(kwargs):
"""Helper function to build command line arguments out of dict."""
args = []
for k in sorted(kwargs.keys()):
v = kwargs[k]
if isinstance(v, Iterable) and not isinstance(v, str):
for value in v:
args.append('-{}'.format(k))
if value is not None:
args.append('{}'.format(value))
continue
args.append('-{}'.format(k))
if v is not None:
args.append('{}'.format(v))
return args
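A worked call makes the helper's two behaviors concrete: scalar values become one `-flag value` pair, while iterable (non-string) values repeat the flag once per item. Self-contained mirror of the function above with an example:

```python
try:
    from collections.abc import Iterable
except ImportError:  # Python 2 fallback, as in the original
    from collections import Iterable

def convert_kwargs_to_cmd_line_args(kwargs):
    # Mirror of the helper above: keys are sorted; iterable (non-string)
    # values repeat the flag once per item.
    args = []
    for k in sorted(kwargs.keys()):
        v = kwargs[k]
        if isinstance(v, Iterable) and not isinstance(v, str):
            for value in v:
                args.append('-{}'.format(k))
                if value is not None:
                    args.append('{}'.format(value))
            continue
        args.append('-{}'.format(k))
        if v is not None:
            args.append('{}'.format(v))
    return args

print(convert_kwargs_to_cmd_line_args({'crf': 23, 'map': ['0:v', '1:a']}))
# ['-crf', '23', '-map', '0:v', '-map', '1:a']
```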

ffmpeg/_view.py

@ -0,0 +1,108 @@
from __future__ import unicode_literals
from builtins import str
from .dag import get_outgoing_edges
from ._run import topo_sort
import tempfile
from ffmpeg.nodes import (
FilterNode,
get_stream_spec_nodes,
InputNode,
OutputNode,
stream_operator,
)
_RIGHT_ARROW = '\u2192'
def _get_node_color(node):
if isinstance(node, InputNode):
color = '#99cc00'
elif isinstance(node, OutputNode):
color = '#99ccff'
elif isinstance(node, FilterNode):
color = '#ffcc00'
else:
color = None
return color
@stream_operator()
def view(stream_spec, detail=False, filename=None, pipe=False, **kwargs):
try:
import graphviz
except ImportError:
raise ImportError(
'failed to import graphviz; please make sure graphviz is installed (e.g. '
'`pip install graphviz`)'
)
show_labels = kwargs.pop('show_labels', True)
if pipe and filename is not None:
raise ValueError('Can\'t specify both `filename` and `pipe`')
elif not pipe and filename is None:
filename = tempfile.mktemp()
nodes = get_stream_spec_nodes(stream_spec)
sorted_nodes, outgoing_edge_maps = topo_sort(nodes)
graph = graphviz.Digraph(format='png')
graph.attr(rankdir='LR')
if len(list(kwargs.keys())) != 0:
raise ValueError(
'Invalid kwargs key(s): {}'.format(', '.join(list(kwargs.keys())))
)
for node in sorted_nodes:
color = _get_node_color(node)
if detail:
lines = [node.short_repr]
lines += ['{!r}'.format(arg) for arg in node.args]
lines += [
'{}={!r}'.format(key, node.kwargs[key]) for key in sorted(node.kwargs)
]
node_text = '\n'.join(lines)
else:
node_text = node.short_repr
graph.node(
str(hash(node)), node_text, shape='box', style='filled', fillcolor=color
)
outgoing_edge_map = outgoing_edge_maps.get(node, {})
for edge in get_outgoing_edges(node, outgoing_edge_map):
kwargs = {}
up_label = edge.upstream_label
down_label = edge.downstream_label
up_selector = edge.upstream_selector
if show_labels and (
up_label is not None
or down_label is not None
or up_selector is not None
):
if up_label is None:
up_label = ''
if up_selector is not None:
up_label += ":" + up_selector
if down_label is None:
down_label = ''
if up_label != '' and down_label != '':
middle = ' {} '.format(_RIGHT_ARROW)
else:
middle = ''
kwargs['label'] = '{} {} {}'.format(up_label, middle, down_label)
upstream_node_id = str(hash(edge.upstream_node))
downstream_node_id = str(hash(edge.downstream_node))
graph.edge(upstream_node_id, downstream_node_id, **kwargs)
if pipe:
return graph.pipe()
else:
graph.view(filename, cleanup=True)
return stream_spec
__all__ = ['view']

ffmpeg/dag.py (new file)

@@ -0,0 +1,240 @@
from __future__ import unicode_literals
from ._utils import get_hash, get_hash_int
from builtins import object
from collections import namedtuple
class DagNode(object):
"""Node in a directed-acyclic graph (DAG).
Edges:
DagNodes are connected by edges. An edge connects two nodes with a label for
each side:
- ``upstream_node``: upstream/parent node
- ``upstream_label``: label on the outgoing side of the upstream node
- ``downstream_node``: downstream/child node
- ``downstream_label``: label on the incoming side of the downstream node
For example, DagNode A may be connected to DagNode B with an edge labelled
"foo" on A's side, and "bar" on B's side:
_____ _____
| | | |
| A >[foo]---[bar]> B |
|_____| |_____|
Edge labels may be integers or strings, and nodes cannot have more than one
incoming edge with the same label.
DagNodes may have any number of incoming edges and any number of outgoing
edges. DagNodes keep track only of their incoming edges, but the entire graph
structure can be inferred by looking at the furthest downstream nodes and
working backwards.
Hashing:
DagNodes must be hashable, and two nodes are considered to be equivalent if
they have the same hash value.
Nodes are immutable, and the hash should remain constant as a result. If a
node with new contents is required, create a new node and throw the old one
away.
String representation:
In order for graph visualization tools to show useful information, nodes must
be representable as strings. The ``repr`` operator should provide a more or
less "full" representation of the node, and the ``short_repr`` property should
be a shortened, concise representation.
Again, because nodes are immutable, the string representations should remain
constant.
"""
def __hash__(self):
"""Return an integer hash of the node."""
raise NotImplementedError()
def __eq__(self, other):
"""Compare two nodes; implementations should return True if (and only if)
hashes match.
"""
raise NotImplementedError()
def __repr__(self):
"""Return a full string representation of the node."""
raise NotImplementedError()
@property
def short_repr(self):
"""Return a partial/concise representation of the node."""
raise NotImplementedError()
@property
def incoming_edge_map(self):
"""Provides information about all incoming edges that connect to this node.
The edge map is a dictionary that maps an ``incoming_label`` to
``(outgoing_node, outgoing_label)``. Note that implicitly, ``incoming_node`` is
``self``. See "Edges" section above.
"""
raise NotImplementedError()
DagEdge = namedtuple(
'DagEdge',
[
'downstream_node',
'downstream_label',
'upstream_node',
'upstream_label',
'upstream_selector',
],
)
def get_incoming_edges(downstream_node, incoming_edge_map):
edges = []
for downstream_label, upstream_info in list(incoming_edge_map.items()):
upstream_node, upstream_label, upstream_selector = upstream_info
edges += [
DagEdge(
downstream_node,
downstream_label,
upstream_node,
upstream_label,
upstream_selector,
)
]
return edges
def get_outgoing_edges(upstream_node, outgoing_edge_map):
edges = []
for upstream_label, downstream_infos in sorted(outgoing_edge_map.items()):
for downstream_info in downstream_infos:
downstream_node, downstream_label, downstream_selector = downstream_info
edges += [
DagEdge(
downstream_node,
downstream_label,
upstream_node,
upstream_label,
downstream_selector,
)
]
return edges
class KwargReprNode(DagNode):
"""A DagNode that can be represented as a set of args+kwargs."""
@property
def __upstream_hashes(self):
hashes = []
for downstream_label, upstream_info in list(self.incoming_edge_map.items()):
upstream_node, upstream_label, upstream_selector = upstream_info
hashes += [
hash(x)
for x in [
downstream_label,
upstream_node,
upstream_label,
upstream_selector,
]
]
return hashes
@property
def __inner_hash(self):
props = {'args': self.args, 'kwargs': self.kwargs}
return get_hash(props)
def __get_hash(self):
hashes = self.__upstream_hashes + [self.__inner_hash]
return get_hash_int(hashes)
def __init__(self, incoming_edge_map, name, args, kwargs):
self.__incoming_edge_map = incoming_edge_map
self.name = name
self.args = args
self.kwargs = kwargs
self.__hash = self.__get_hash()
def __hash__(self):
return self.__hash
def __eq__(self, other):
return hash(self) == hash(other)
@property
def short_hash(self):
return '{:x}'.format(abs(hash(self)))[:12]
def long_repr(self, include_hash=True):
formatted_props = ['{!r}'.format(arg) for arg in self.args]
formatted_props += [
'{}={!r}'.format(key, self.kwargs[key]) for key in sorted(self.kwargs)
]
out = '{}({})'.format(self.name, ', '.join(formatted_props))
if include_hash:
out += ' <{}>'.format(self.short_hash)
return out
def __repr__(self):
return self.long_repr()
@property
def incoming_edges(self):
return get_incoming_edges(self, self.incoming_edge_map)
@property
def incoming_edge_map(self):
return self.__incoming_edge_map
@property
def short_repr(self):
return self.name
def topo_sort(downstream_nodes):
marked_nodes = []
sorted_nodes = []
outgoing_edge_maps = {}
def visit(
upstream_node,
upstream_label,
downstream_node,
downstream_label,
downstream_selector=None,
):
if upstream_node in marked_nodes:
raise RuntimeError('Graph is not a DAG')
if downstream_node is not None:
outgoing_edge_map = outgoing_edge_maps.get(upstream_node, {})
outgoing_edge_infos = outgoing_edge_map.get(upstream_label, [])
outgoing_edge_infos += [
(downstream_node, downstream_label, downstream_selector)
]
outgoing_edge_map[upstream_label] = outgoing_edge_infos
outgoing_edge_maps[upstream_node] = outgoing_edge_map
if upstream_node not in sorted_nodes:
marked_nodes.append(upstream_node)
for edge in upstream_node.incoming_edges:
visit(
edge.upstream_node,
edge.upstream_label,
edge.downstream_node,
edge.downstream_label,
edge.upstream_selector,
)
marked_nodes.remove(upstream_node)
sorted_nodes.append(upstream_node)
unmarked_nodes = [(node, None) for node in downstream_nodes]
while unmarked_nodes:
upstream_node, upstream_label = unmarked_nodes.pop()
visit(upstream_node, upstream_label, None, None)
return sorted_nodes, outgoing_edge_maps
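`topo_sort` is a depth-first post-order walk that starts at the downstream (output) nodes and recurses through incoming edges, using `marked_nodes` to detect cycles. A minimal stdlib sketch of the same post-order idea, using hypothetical string nodes and a `parents` mapping rather than the `DagNode` API:

```python
def topo_sort(downstream_nodes, parents):
    """Post-order DFS from the sinks; `parents` maps node -> upstream nodes."""
    marked = set()   # nodes on the current DFS path (cycle detection)
    ordered = []     # result: every node appears after all of its parents

    def visit(node):
        if node in marked:
            raise RuntimeError('Graph is not a DAG')
        if node not in ordered:
            marked.add(node)
            for parent in parents.get(node, []):
                visit(parent)
            marked.remove(node)
            ordered.append(node)

    for node in downstream_nodes:
        visit(node)
    return ordered


# in -> trim -> out: upstream nodes sort before downstream ones.
order = topo_sort(['out'], {'out': ['trim'], 'trim': ['in']})
assert order == ['in', 'trim', 'out']
```

The real implementation additionally records each node's outgoing edges into `outgoing_edge_maps` as it walks, since `DagNode` only stores incoming edges.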

@@ -1,72 +1,380 @@
from __future__ import unicode_literals
from past.builtins import basestring
from .dag import KwargReprNode
from ._utils import escape_chars, get_hash_int
from builtins import object
import hashlib
import json
import os
class Node(object):
"""Node base"""
def __init__(self, parents, name, *args, **kwargs):
parent_hashes = [parent._hash for parent in parents]
assert len(parent_hashes) == len(set(parent_hashes)), 'Same node cannot be included as parent multiple times'
self._parents = parents
self._name = name
self._args = args
self._kwargs = kwargs
self._update_hash()
def _is_of_types(obj, types):
valid = False
for stream_type in types:
if isinstance(obj, stream_type):
valid = True
break
return valid
def __repr__(self):
formatted_props = ['{}'.format(arg) for arg in self._args]
formatted_props += ['{}={!r}'.format(key, self._kwargs[key]) for key in sorted(self._kwargs)]
return '{}({})'.format(self._name, ','.join(formatted_props))
def _get_types_str(types):
return ', '.join(['{}.{}'.format(x.__module__, x.__name__) for x in types])
class Stream(object):
"""Represents the outgoing edge of an upstream node; may be used to create more
downstream nodes.
"""
def __init__(
self, upstream_node, upstream_label, node_types, upstream_selector=None
):
if not _is_of_types(upstream_node, node_types):
raise TypeError(
'Expected upstream node to be of one of the following type(s): {}; got {}'.format(
_get_types_str(node_types), type(upstream_node)
)
)
self.node = upstream_node
self.label = upstream_label
self.selector = upstream_selector
def __hash__(self):
return int(self._hash, base=16)
return get_hash_int([hash(self.node), hash(self.label)])
def __eq__(self, other):
return self._hash == other._hash
return hash(self) == hash(other)
def _update_hash(self):
props = {'args': self._args, 'kwargs': self._kwargs}
my_hash = hashlib.md5(json.dumps(props, sort_keys=True).encode('utf-8')).hexdigest()
parent_hashes = [parent._hash for parent in self._parents]
hashes = parent_hashes + [my_hash]
self._hash = hashlib.md5(','.join(hashes).encode('utf-8')).hexdigest()
def __repr__(self):
node_repr = self.node.long_repr(include_hash=False)
selector = ''
if self.selector:
selector = ':{}'.format(self.selector)
out = '{}[{!r}{}] <{}>'.format(
node_repr, self.label, selector, self.node.short_hash
)
return out
def __getitem__(self, index):
"""
Select a component (audio, video) of the stream.
Example:
Process the audio and video portions of a stream independently::
input = ffmpeg.input('in.mp4')
audio = input['a'].filter("aecho", 0.8, 0.9, 1000, 0.3)
video = input['v'].hflip()
out = ffmpeg.output(audio, video, 'out.mp4')
"""
if self.selector is not None:
raise ValueError('Stream already has a selector: {}'.format(self))
elif not isinstance(index, basestring):
raise TypeError("Expected string index (e.g. 'a'); got {!r}".format(index))
return self.node.stream(label=self.label, selector=index)
@property
def audio(self):
"""Select the audio-portion of a stream.
Some ffmpeg filters drop audio streams, and care must be taken
to preserve the audio in the final output. The ``.audio`` and
``.video`` operators can be used to reference the audio/video
portions of a stream so that they can be processed separately
and then re-combined later in the pipeline. This dilemma is
intrinsic to ffmpeg, and ffmpeg-python tries to stay out of the
way while users may refer to the official ffmpeg documentation
as to why certain filters drop audio.
``stream.audio`` is a shorthand for ``stream['a']``.
Example:
Process the audio and video portions of a stream independently::
input = ffmpeg.input('in.mp4')
audio = input.audio.filter("aecho", 0.8, 0.9, 1000, 0.3)
video = input.video.hflip()
out = ffmpeg.output(audio, video, 'out.mp4')
"""
return self['a']
@property
def video(self):
"""Select the video-portion of a stream.
Some ffmpeg filters drop audio streams, and care must be taken
to preserve the audio in the final output. The ``.audio`` and
``.video`` operators can be used to reference the audio/video
portions of a stream so that they can be processed separately
and then re-combined later in the pipeline. This dilemma is
intrinsic to ffmpeg, and ffmpeg-python tries to stay out of the
way while users may refer to the official ffmpeg documentation
as to why certain filters drop audio.
``stream.video`` is a shorthand for ``stream['v']``.
Example:
Process the audio and video portions of a stream independently::
input = ffmpeg.input('in.mp4')
audio = input.audio.filter("aecho", 0.8, 0.9, 1000, 0.3)
video = input.video.hflip()
out = ffmpeg.output(audio, video, 'out.mp4')
"""
return self['v']
def get_stream_map(stream_spec):
if stream_spec is None:
stream_map = {}
elif isinstance(stream_spec, Stream):
stream_map = {None: stream_spec}
elif isinstance(stream_spec, (list, tuple)):
stream_map = dict(enumerate(stream_spec))
elif isinstance(stream_spec, dict):
stream_map = stream_spec
return stream_map
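`get_stream_map` normalizes the flexible `stream_spec` argument — `None`, a single stream, a list/tuple, or a dict — into a single label→stream dict. A sketch with plain strings standing in for `Stream` objects (the real function type-checks against `Stream` instead of falling through to the single-value case):

```python
def get_stream_map(stream_spec):
    """Normalize to {label: stream}; a None label means 'the only stream'."""
    if stream_spec is None:
        return {}
    if isinstance(stream_spec, (list, tuple)):
        return dict(enumerate(stream_spec))   # positional labels 0, 1, ...
    if isinstance(stream_spec, dict):
        return stream_spec                    # caller-chosen labels
    return {None: stream_spec}                # single stream


assert get_stream_map(None) == {}
assert get_stream_map('s') == {None: 's'}
assert get_stream_map(['s0', 's1']) == {0: 's0', 1: 's1'}
assert get_stream_map({'a': 's'}) == {'a': 's'}
```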
def get_stream_map_nodes(stream_map):
nodes = []
for stream in list(stream_map.values()):
if not isinstance(stream, Stream):
raise TypeError('Expected Stream; got {}'.format(type(stream)))
nodes.append(stream.node)
return nodes
def get_stream_spec_nodes(stream_spec):
stream_map = get_stream_map(stream_spec)
return get_stream_map_nodes(stream_map)
class Node(KwargReprNode):
"""Node base"""
@classmethod
def __check_input_len(cls, stream_map, min_inputs, max_inputs):
if min_inputs is not None and len(stream_map) < min_inputs:
raise ValueError(
'Expected at least {} input stream(s); got {}'.format(
min_inputs, len(stream_map)
)
)
elif max_inputs is not None and len(stream_map) > max_inputs:
raise ValueError(
'Expected at most {} input stream(s); got {}'.format(
max_inputs, len(stream_map)
)
)
@classmethod
def __check_input_types(cls, stream_map, incoming_stream_types):
for stream in list(stream_map.values()):
if not _is_of_types(stream, incoming_stream_types):
raise TypeError(
'Expected incoming stream(s) to be of one of the following types: {}; got {}'.format(
_get_types_str(incoming_stream_types), type(stream)
)
)
@classmethod
def __get_incoming_edge_map(cls, stream_map):
incoming_edge_map = {}
for downstream_label, upstream in list(stream_map.items()):
incoming_edge_map[downstream_label] = (
upstream.node,
upstream.label,
upstream.selector,
)
return incoming_edge_map
def __init__(
self,
stream_spec,
name,
incoming_stream_types,
outgoing_stream_type,
min_inputs,
max_inputs,
args=[],
kwargs={},
):
stream_map = get_stream_map(stream_spec)
self.__check_input_len(stream_map, min_inputs, max_inputs)
self.__check_input_types(stream_map, incoming_stream_types)
incoming_edge_map = self.__get_incoming_edge_map(stream_map)
super(Node, self).__init__(incoming_edge_map, name, args, kwargs)
self.__outgoing_stream_type = outgoing_stream_type
self.__incoming_stream_types = incoming_stream_types
def stream(self, label=None, selector=None):
"""Create an outgoing stream originating from this node.
More nodes may be attached onto the outgoing stream.
"""
return self.__outgoing_stream_type(self, label, upstream_selector=selector)
def __getitem__(self, item):
"""Create an outgoing stream originating from this node; syntactic sugar for
``self.stream(label)``. It can also be used to apply a selector: e.g.
``node[0:'a']`` returns a stream with label 0 and selector ``'a'``, which is
the same as ``node.stream(label=0, selector='a')``.
Example:
Process the audio and video portions of a stream independently::
input = ffmpeg.input('in.mp4')
audio = input[:'a'].filter("aecho", 0.8, 0.9, 1000, 0.3)
video = input[:'v'].hflip()
out = ffmpeg.output(audio, video, 'out.mp4')
"""
if isinstance(item, slice):
return self.stream(label=item.start, selector=item.stop)
else:
return self.stream(label=item)
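The `node[label:selector]` syntax works because Python passes `__getitem__` a `slice` object whose `start` and `stop` carry the two values. A tiny stdlib demo of the trick, using a hypothetical class rather than the `Node` API:

```python
class Sliceable:
    def __getitem__(self, item):
        # node[0:'a'] arrives here as slice(0, 'a', None).
        if isinstance(item, slice):
            return (item.start, item.stop)   # (label, selector)
        return (item, None)                  # plain label, no selector


n = Sliceable()
assert n[0:'a'] == (0, 'a')      # label 0, selector 'a'
assert n[:'v'] == (None, 'v')    # no label, selector 'v'
assert n['out'] == ('out', None)
```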
class FilterableStream(Stream):
def __init__(self, upstream_node, upstream_label, upstream_selector=None):
super(FilterableStream, self).__init__(
upstream_node, upstream_label, {InputNode, FilterNode}, upstream_selector
)
# noinspection PyMethodOverriding
class InputNode(Node):
"""InputNode type"""
def __init__(self, name, *args, **kwargs):
super(InputNode, self).__init__(parents=[], name=name, *args, **kwargs)
def __init__(self, name, args=[], kwargs={}):
super(InputNode, self).__init__(
stream_spec=None,
name=name,
incoming_stream_types={},
outgoing_stream_type=FilterableStream,
min_inputs=0,
max_inputs=0,
args=args,
kwargs=kwargs,
)
@property
def short_repr(self):
return os.path.basename(self.kwargs['filename'])
# noinspection PyMethodOverriding
class FilterNode(Node):
def __init__(self, stream_spec, name, max_inputs=1, args=[], kwargs={}):
super(FilterNode, self).__init__(
stream_spec=stream_spec,
name=name,
incoming_stream_types={FilterableStream},
outgoing_stream_type=FilterableStream,
min_inputs=1,
max_inputs=max_inputs,
args=args,
kwargs=kwargs,
)
"""FilterNode"""
def _get_filter(self):
params_text = self._name
arg_params = ['{}'.format(arg) for arg in self._args]
kwarg_params = ['{}={}'.format(k, self._kwargs[k]) for k in sorted(self._kwargs)]
def _get_filter(self, outgoing_edges):
args = self.args
kwargs = self.kwargs
if self.name in ('split', 'asplit'):
args = [len(outgoing_edges)]
out_args = [escape_chars(x, '\\\'=:') for x in args]
out_kwargs = {}
for k, v in list(kwargs.items()):
k = escape_chars(k, '\\\'=:')
v = escape_chars(v, '\\\'=:')
out_kwargs[k] = v
arg_params = [escape_chars(v, '\\\'=:') for v in out_args]
kwarg_params = ['{}={}'.format(k, out_kwargs[k]) for k in sorted(out_kwargs)]
params = arg_params + kwarg_params
params_text = escape_chars(self.name, '\\\'=:')
if params:
params_text += '={}'.format(':'.join(params))
return params_text
return escape_chars(params_text, '\\\'[],;')
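`_get_filter` renders a node in ffmpeg filter syntax: the escaped filter name, then `=` and the `:`-joined positional args and sorted `key=value` pairs. A self-contained sketch reproducing the format with the same escape helper (the `split`/`asplit` special-casing and the node state are omitted):

```python
def escape_chars(text, chars):
    text = str(text)
    chars = list(set(chars))
    if '\\' in chars:
        chars.remove('\\')
        chars.insert(0, '\\')
    for ch in chars:
        text = text.replace(ch, '\\' + ch)
    return text


def get_filter(name, args=(), kwargs=None):
    """Render `name=arg1:arg2:k1=v1` with filter-level escaping applied."""
    kwargs = kwargs or {}
    esc = lambda v: escape_chars(v, '\\\'=:')
    params = [esc(a) for a in args]
    params += ['{}={}'.format(esc(k), esc(v)) for k, v in sorted(kwargs.items())]
    text = esc(name)
    if params:
        text += '={}'.format(':'.join(params))
    # A second escaping pass for characters special at the filtergraph level.
    return escape_chars(text, '\\\'[],;')


assert get_filter('trim', kwargs={'start_frame': 10, 'end_frame': 20}) \
    == 'trim=end_frame=20:start_frame=10'
assert get_filter('hflip') == 'hflip'
```

The output matches the `-filter_complex` fragments asserted in the tests later in this diff, e.g. `trim=end_frame=20:start_frame=10`.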
# noinspection PyMethodOverriding
class OutputNode(Node):
"""OutputNode"""
pass
def __init__(self, stream, name, args=[], kwargs={}):
super(OutputNode, self).__init__(
stream_spec=stream,
name=name,
incoming_stream_types={FilterableStream},
outgoing_stream_type=OutputStream,
min_inputs=1,
max_inputs=None,
args=args,
kwargs=kwargs,
)
@property
def short_repr(self):
return os.path.basename(self.kwargs['filename'])
class OutputStream(Stream):
def __init__(self, upstream_node, upstream_label, upstream_selector=None):
super(OutputStream, self).__init__(
upstream_node,
upstream_label,
{OutputNode, GlobalNode, MergeOutputsNode},
upstream_selector=upstream_selector,
)
# noinspection PyMethodOverriding
class MergeOutputsNode(Node):
def __init__(self, streams, name):
super(MergeOutputsNode, self).__init__(
stream_spec=streams,
name=name,
incoming_stream_types={OutputStream},
outgoing_stream_type=OutputStream,
min_inputs=1,
max_inputs=None,
)
# noinspection PyMethodOverriding
class GlobalNode(Node):
def __init__(self, parent, name, *args, **kwargs):
assert isinstance(parent, OutputNode), 'Global nodes can only be attached after output nodes'
super(GlobalNode, self).__init__([parent], name, *args, **kwargs)
def __init__(self, stream, name, args=[], kwargs={}):
super(GlobalNode, self).__init__(
stream_spec=stream,
name=name,
incoming_stream_types={OutputStream},
outgoing_stream_type=OutputStream,
min_inputs=1,
max_inputs=1,
args=args,
kwargs=kwargs,
)
def operator(node_classes={Node}, name=None):
def stream_operator(stream_classes={Stream}, name=None):
def decorator(func):
func_name = name or func.__name__
[setattr(node_class, func_name, func) for node_class in node_classes]
[setattr(stream_class, func_name, func) for stream_class in stream_classes]
return func
return decorator
def filter_operator(name=None):
return stream_operator(stream_classes={FilterableStream}, name=name)
def output_operator(name=None):
return stream_operator(stream_classes={OutputStream}, name=name)
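`stream_operator` is a decorator factory that registers the decorated function as a method on each listed stream class; this is how the fluent `ffmpeg.input(...).filter(...).output(...)` chaining is assembled from free functions. A minimal stdlib sketch of the pattern with a hypothetical `Stream` class:

```python
def stream_operator(stream_classes, name=None):
    def decorator(func):
        func_name = name or func.__name__
        for cls in stream_classes:
            # Attach the function as a method on every target class.
            setattr(cls, func_name, func)
        return func
    return decorator


class Stream:
    def __init__(self, value):
        self.value = value


@stream_operator(stream_classes={Stream})
def double(stream):
    return Stream(stream.value * 2)


# `double` is now both a module-level function and a Stream method.
assert Stream(3).double().value == 6
assert double(Stream(4)).value == 8
```

`filter_operator` and `output_operator` above are just this factory pre-bound to `FilterableStream` and `OutputStream`, so filters can only be chained onto filterable streams and run/compile operations only onto outputs.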
__all__ = ['Stream']

Binary file not shown.

@@ -1,21 +1,44 @@
from __future__ import unicode_literals
from builtins import bytes
from builtins import range
from builtins import str
import ffmpeg
import os
import pytest
import subprocess
import random
import re
import subprocess
import sys
try:
import mock # python 2
except ImportError:
from unittest import mock # python 3
TEST_DIR = os.path.dirname(__file__)
SAMPLE_DATA_DIR = os.path.join(TEST_DIR, 'sample_data')
TEST_INPUT_FILE = os.path.join(SAMPLE_DATA_DIR, 'dummy.mp4')
TEST_INPUT_FILE1 = os.path.join(SAMPLE_DATA_DIR, 'in1.mp4')
TEST_OVERLAY_FILE = os.path.join(SAMPLE_DATA_DIR, 'overlay.png')
TEST_OUTPUT_FILE = os.path.join(SAMPLE_DATA_DIR, 'dummy2.mp4')
TEST_OUTPUT_FILE1 = os.path.join(SAMPLE_DATA_DIR, 'out1.mp4')
TEST_OUTPUT_FILE2 = os.path.join(SAMPLE_DATA_DIR, 'out2.mp4')
BOGUS_INPUT_FILE = os.path.join(SAMPLE_DATA_DIR, 'bogus')
subprocess.check_call(['ffmpeg', '-version'])
def test_escape_chars():
assert ffmpeg._utils.escape_chars('a:b', ':') == r'a\:b'
assert ffmpeg._utils.escape_chars('a\\:b', ':\\') == 'a\\\\\\:b'
assert (
ffmpeg._utils.escape_chars('a:b,c[d]e%{}f\'g\'h\\i', '\\\':,[]%')
== 'a\\:b\\,c\\[d\\]e\\%{}f\\\'g\\\'h\\\\i'
)
assert ffmpeg._utils.escape_chars(123, ':\\') == '123'
def test_fluent_equality():
base1 = ffmpeg.input('dummy1.mp4')
base2 = ffmpeg.input('dummy1.mp4')
@@ -39,134 +62,625 @@ def test_fluent_concat():
concat1 = ffmpeg.concat(trimmed1, trimmed2, trimmed3)
concat2 = ffmpeg.concat(trimmed1, trimmed2, trimmed3)
concat3 = ffmpeg.concat(trimmed1, trimmed3, trimmed2)
concat4 = ffmpeg.concat()
concat5 = ffmpeg.concat()
assert concat1 == concat2
assert concat1 != concat3
assert concat4 == concat5
def test_fluent_output():
(ffmpeg
.input('dummy.mp4')
.trim(start_frame=10, end_frame=20)
.output('dummy2.mp4')
)
ffmpeg.input('dummy.mp4').trim(start_frame=10, end_frame=20).output('dummy2.mp4')
def test_fluent_complex_filter():
in_file = ffmpeg.input('dummy.mp4')
return (ffmpeg
.concat(
in_file.trim(start_frame=10, end_frame=20),
in_file.trim(start_frame=30, end_frame=40),
in_file.trim(start_frame=50, end_frame=60)
)
.output('dummy2.mp4')
)
return ffmpeg.concat(
in_file.trim(start_frame=10, end_frame=20),
in_file.trim(start_frame=30, end_frame=40),
in_file.trim(start_frame=50, end_frame=60),
).output('dummy2.mp4')
def test_repr():
def test_node_repr():
in_file = ffmpeg.input('dummy.mp4')
trim1 = ffmpeg.trim(in_file, start_frame=10, end_frame=20)
trim2 = ffmpeg.trim(in_file, start_frame=30, end_frame=40)
trim3 = ffmpeg.trim(in_file, start_frame=50, end_frame=60)
concatted = ffmpeg.concat(trim1, trim2, trim3)
output = ffmpeg.output(concatted, 'dummy2.mp4')
assert repr(in_file) == "input(filename={!r})".format('dummy.mp4')
assert repr(trim1) == "trim(end_frame=20,start_frame=10)"
assert repr(trim2) == "trim(end_frame=40,start_frame=30)"
assert repr(trim3) == "trim(end_frame=60,start_frame=50)"
assert repr(concatted) == "concat(n=3)"
assert repr(output) == "output(filename={!r})".format('dummy2.mp4')
assert repr(in_file.node) == 'input(filename={!r}) <{}>'.format(
'dummy.mp4', in_file.node.short_hash
)
assert repr(trim1.node) == 'trim(end_frame=20, start_frame=10) <{}>'.format(
trim1.node.short_hash
)
assert repr(trim2.node) == 'trim(end_frame=40, start_frame=30) <{}>'.format(
trim2.node.short_hash
)
assert repr(trim3.node) == 'trim(end_frame=60, start_frame=50) <{}>'.format(
trim3.node.short_hash
)
assert repr(concatted.node) == 'concat(n=3) <{}>'.format(concatted.node.short_hash)
assert repr(output.node) == 'output(filename={!r}) <{}>'.format(
'dummy2.mp4', output.node.short_hash
)
def test_get_args_simple():
def test_stream_repr():
in_file = ffmpeg.input('dummy.mp4')
assert repr(in_file) == 'input(filename={!r})[None] <{}>'.format(
'dummy.mp4', in_file.node.short_hash
)
split0 = in_file.filter_multi_output('split')[0]
assert repr(split0) == 'split()[0] <{}>'.format(split0.node.short_hash)
dummy_out = in_file.filter_multi_output('dummy')['out']
assert repr(dummy_out) == 'dummy()[{!r}] <{}>'.format(
dummy_out.label, dummy_out.node.short_hash
)
def test_repeated_args():
out_file = ffmpeg.input('dummy.mp4').output(
'dummy2.mp4', streamid=['0:0x101', '1:0x102']
)
assert out_file.get_args() == [
'-i',
'dummy.mp4',
'-streamid',
'0:0x101',
'-streamid',
'1:0x102',
'dummy2.mp4',
]
def test__get_args__simple():
out_file = ffmpeg.input('dummy.mp4').output('dummy2.mp4')
assert out_file.get_args() == ['-i', 'dummy.mp4', 'dummy2.mp4']
def test_global_args():
out_file = (
ffmpeg.input('dummy.mp4')
.output('dummy2.mp4')
.global_args('-progress', 'someurl')
)
assert out_file.get_args() == [
'-i',
'dummy.mp4',
'dummy2.mp4',
'-progress',
'someurl',
]
def _get_simple_example():
return ffmpeg.input(TEST_INPUT_FILE1).output(TEST_OUTPUT_FILE1)
def _get_complex_filter_example():
in_file = ffmpeg.input(TEST_INPUT_FILE)
split = ffmpeg.input(TEST_INPUT_FILE1).vflip().split()
split0 = split[0]
split1 = split[1]
overlay_file = ffmpeg.input(TEST_OVERLAY_FILE)
return (ffmpeg
.concat(
in_file.trim(start_frame=10, end_frame=20),
in_file.trim(start_frame=30, end_frame=40),
overlay_file = ffmpeg.crop(overlay_file, 10, 10, 158, 112)
return (
ffmpeg.concat(
split0.trim(start_frame=10, end_frame=20),
split1.trim(start_frame=30, end_frame=40),
)
.overlay(overlay_file.hflip())
.drawbox(50, 50, 120, 120, color='red', thickness=5)
.output(TEST_OUTPUT_FILE)
.output(TEST_OUTPUT_FILE1)
.overwrite_output()
)
def test_get_args_complex_filter():
def test__get_args__complex_filter():
out = _get_complex_filter_example()
args = ffmpeg.get_args(out)
assert args == [
'-i', TEST_INPUT_FILE,
'-i', TEST_OVERLAY_FILE,
'-i',
TEST_INPUT_FILE1,
'-i',
TEST_OVERLAY_FILE,
'-filter_complex',
'[0]trim=end_frame=20:start_frame=10[v0];' \
'[0]trim=end_frame=40:start_frame=30[v1];' \
'[v0][v1]concat=n=2[v2];' \
'[1]hflip[v3];' \
'[v2][v3]overlay=eof_action=repeat[v4];' \
'[v4]drawbox=50:50:120:120:red:t=5[v5]',
'-map', '[v5]', os.path.join(SAMPLE_DATA_DIR, 'dummy2.mp4'),
'-y'
'[0]vflip[s0];'
'[s0]split=2[s1][s2];'
'[s1]trim=end_frame=20:start_frame=10[s3];'
'[s2]trim=end_frame=40:start_frame=30[s4];'
'[s3][s4]concat=n=2[s5];'
'[1]crop=158:112:10:10[s6];'
'[s6]hflip[s7];'
'[s5][s7]overlay=eof_action=repeat[s8];'
'[s8]drawbox=50:50:120:120:red:t=5[s9]',
'-map',
'[s9]',
TEST_OUTPUT_FILE1,
'-y',
]
#def test_version():
def test_combined_output():
i1 = ffmpeg.input(TEST_INPUT_FILE1)
i2 = ffmpeg.input(TEST_OVERLAY_FILE)
out = ffmpeg.output(i1, i2, TEST_OUTPUT_FILE1)
assert out.get_args() == [
'-i',
TEST_INPUT_FILE1,
'-i',
TEST_OVERLAY_FILE,
'-map',
'0',
'-map',
'1',
TEST_OUTPUT_FILE1,
]
@pytest.mark.parametrize('use_shorthand', [True, False])
def test_filter_with_selector(use_shorthand):
i = ffmpeg.input(TEST_INPUT_FILE1)
if use_shorthand:
v1 = i.video.hflip()
a1 = i.audio.filter('aecho', 0.8, 0.9, 1000, 0.3)
else:
v1 = i['v'].hflip()
a1 = i['a'].filter('aecho', 0.8, 0.9, 1000, 0.3)
out = ffmpeg.output(a1, v1, TEST_OUTPUT_FILE1)
assert out.get_args() == [
'-i',
TEST_INPUT_FILE1,
'-filter_complex',
'[0:a]aecho=0.8:0.9:1000:0.3[s0];' '[0:v]hflip[s1]',
'-map',
'[s0]',
'-map',
'[s1]',
TEST_OUTPUT_FILE1,
]
def test_get_item_with_bad_selectors():
input = ffmpeg.input(TEST_INPUT_FILE1)
with pytest.raises(ValueError) as excinfo:
input['a']['a']
assert str(excinfo.value).startswith('Stream already has a selector:')
with pytest.raises(TypeError) as excinfo:
input[:'a']
assert str(excinfo.value).startswith("Expected string index (e.g. 'a')")
with pytest.raises(TypeError) as excinfo:
input[5]
assert str(excinfo.value).startswith("Expected string index (e.g. 'a')")
def _get_complex_filter_asplit_example():
split = ffmpeg.input(TEST_INPUT_FILE1).vflip().asplit()
split0 = split[0]
split1 = split[1]
return (
ffmpeg.concat(
split0.filter('atrim', start=10, end=20),
split1.filter('atrim', start=30, end=40),
)
.output(TEST_OUTPUT_FILE1)
.overwrite_output()
)
def test_filter_concat__video_only():
in1 = ffmpeg.input('in1.mp4')
in2 = ffmpeg.input('in2.mp4')
args = ffmpeg.concat(in1, in2).output('out.mp4').get_args()
assert args == [
'-i',
'in1.mp4',
'-i',
'in2.mp4',
'-filter_complex',
'[0][1]concat=n=2[s0]',
'-map',
'[s0]',
'out.mp4',
]
def test_filter_concat__audio_only():
in1 = ffmpeg.input('in1.mp4')
in2 = ffmpeg.input('in2.mp4')
args = ffmpeg.concat(in1, in2, v=0, a=1).output('out.mp4').get_args()
assert args == [
'-i',
'in1.mp4',
'-i',
'in2.mp4',
'-filter_complex',
'[0][1]concat=a=1:n=2:v=0[s0]',
'-map',
'[s0]',
'out.mp4',
]
def test_filter_concat__audio_video():
in1 = ffmpeg.input('in1.mp4')
in2 = ffmpeg.input('in2.mp4')
joined = ffmpeg.concat(in1.video, in1.audio, in2.hflip(), in2['a'], v=1, a=1).node
args = ffmpeg.output(joined[0], joined[1], 'out.mp4').get_args()
assert args == [
'-i',
'in1.mp4',
'-i',
'in2.mp4',
'-filter_complex',
'[1]hflip[s0];[0:v][0:a][s0][1:a]concat=a=1:n=2:v=1[s1][s2]',
'-map',
'[s1]',
'-map',
'[s2]',
'out.mp4',
]
def test_filter_concat__wrong_stream_count():
in1 = ffmpeg.input('in1.mp4')
in2 = ffmpeg.input('in2.mp4')
with pytest.raises(ValueError) as excinfo:
ffmpeg.concat(in1.video, in1.audio, in2.hflip(), v=1, a=1).node
assert (
str(excinfo.value)
== 'Expected concat input streams to have length multiple of 2 (v=1, a=1); got 3'
)
def test_filter_asplit():
out = _get_complex_filter_asplit_example()
args = out.get_args()
assert args == [
'-i',
TEST_INPUT_FILE1,
'-filter_complex',
(
'[0]vflip[s0];'
'[s0]asplit=2[s1][s2];'
'[s1]atrim=end=20:start=10[s3];'
'[s2]atrim=end=40:start=30[s4];'
'[s3][s4]concat=n=2[s5]'
),
'-map',
'[s5]',
TEST_OUTPUT_FILE1,
'-y',
]
def test__output__bitrate():
args = (
ffmpeg.input('in')
.output('out', video_bitrate=1000, audio_bitrate=200)
.get_args()
)
assert args == ['-i', 'in', '-b:v', '1000', '-b:a', '200', 'out']
@pytest.mark.parametrize('video_size', [(320, 240), '320x240'])
def test__output__video_size(video_size):
args = ffmpeg.input('in').output('out', video_size=video_size).get_args()
assert args == ['-i', 'in', '-video_size', '320x240', 'out']
def test_filter_normal_arg_escape():
"""Test string escaping of normal filter args (e.g. ``font`` param of ``drawtext``
filter).
"""
def _get_drawtext_font_repr(font):
"""Build a command-line arg using drawtext ``font`` param and extract the
``-filter_complex`` arg.
"""
args = (
ffmpeg.input('in')
.drawtext('test', font='a{}b'.format(font))
.output('out')
.get_args()
)
assert args[:3] == ['-i', 'in', '-filter_complex']
assert args[4:] == ['-map', '[s0]', 'out']
match = re.match(
r'\[0\]drawtext=font=a((.|\n)*)b:text=test\[s0\]',
args[3],
re.MULTILINE,
)
assert match is not None, 'Invalid -filter_complex arg: {!r}'.format(args[3])
return match.group(1)
expected_backslash_counts = {
'x': 0,
'\'': 3,
'\\': 3,
'%': 0,
':': 2,
',': 1,
'[': 1,
']': 1,
'=': 2,
'\n': 0,
}
for ch, expected_backslash_count in list(expected_backslash_counts.items()):
expected = '{}{}'.format('\\' * expected_backslash_count, ch)
actual = _get_drawtext_font_repr(ch)
assert expected == actual
def test_filter_text_arg_str_escape():
"""Test string escaping of normal filter args (e.g. ``text`` param of ``drawtext``
filter).
"""
def _get_drawtext_text_repr(text):
"""Build a command-line arg using drawtext ``text`` param and extract the
``-filter_complex`` arg.
"""
args = ffmpeg.input('in').drawtext('a{}b'.format(text)).output('out').get_args()
assert args[:3] == ['-i', 'in', '-filter_complex']
assert args[4:] == ['-map', '[s0]', 'out']
match = re.match(r'\[0\]drawtext=text=a((.|\n)*)b\[s0\]', args[3], re.MULTILINE)
assert match is not None, 'Invalid -filter_complex arg: {!r}'.format(args[3])
return match.group(1)
expected_backslash_counts = {
'x': 0,
'\'': 7,
'\\': 7,
'%': 4,
':': 2,
',': 1,
'[': 1,
']': 1,
'=': 2,
'\n': 0,
}
for ch, expected_backslash_count in list(expected_backslash_counts.items()):
expected = '{}{}'.format('\\' * expected_backslash_count, ch)
actual = _get_drawtext_text_repr(ch)
assert expected == actual
# def test_version():
# subprocess.check_call(['ffmpeg', '-version'])
def test__compile():
out_file = ffmpeg.input('dummy.mp4').output('dummy2.mp4')
assert out_file.compile() == ['ffmpeg', '-i', 'dummy.mp4', 'dummy2.mp4']
assert out_file.compile(cmd='ffmpeg.old') == [
'ffmpeg.old',
'-i',
'dummy.mp4',
'dummy2.mp4',
]
@pytest.mark.parametrize('pipe_stdin', [True, False])
@pytest.mark.parametrize('pipe_stdout', [True, False])
@pytest.mark.parametrize('pipe_stderr', [True, False])
@pytest.mark.parametrize('cwd', [None, '/tmp'])
def test__run_async(mocker, pipe_stdin, pipe_stdout, pipe_stderr, cwd):
process__mock = mock.Mock()
popen__mock = mocker.patch.object(subprocess, 'Popen', return_value=process__mock)
stream = _get_simple_example()
process = ffmpeg.run_async(
stream,
pipe_stdin=pipe_stdin,
pipe_stdout=pipe_stdout,
pipe_stderr=pipe_stderr,
cwd=cwd,
)
assert process is process__mock
expected_stdin = subprocess.PIPE if pipe_stdin else None
expected_stdout = subprocess.PIPE if pipe_stdout else None
expected_stderr = subprocess.PIPE if pipe_stderr else None
(args,), kwargs = popen__mock.call_args
assert args == ffmpeg.compile(stream)
assert kwargs == dict(
stdin=expected_stdin,
stdout=expected_stdout,
stderr=expected_stderr,
cwd=cwd,
)
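As the mocked call args above verify, `run_async` essentially compiles the stream to an argv list and hands it to `subprocess.Popen` with optional pipes. A runnable sketch of the same pattern, using a portable stand-in command so the example works without ffmpeg installed:

```python
import subprocess
import sys

# Stand-in for the compiled ffmpeg argv: a tiny process that copies
# stdin to stdout, so the example runs anywhere Python does.
args = [sys.executable, '-c', 'import sys; sys.stdout.write(sys.stdin.read())']

process = subprocess.Popen(
    args,
    stdin=subprocess.PIPE,   # what pipe_stdin=True requests
    stdout=subprocess.PIPE,  # what pipe_stdout=True requests
    stderr=None,             # pipe_stderr=False leaves stderr inherited
)
out, _ = process.communicate(input=b'hello')
assert out == b'hello'
```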
def test__run():
stream = _get_complex_filter_example()
out, err = ffmpeg.run(stream)
assert out is None
assert err is None
@pytest.mark.parametrize('capture_stdout', [True, False])
@pytest.mark.parametrize('capture_stderr', [True, False])
def test__run__capture_out(mocker, capture_stdout, capture_stderr):
mocker.patch.object(ffmpeg._run, 'compile', return_value=['echo', 'test'])
stream = _get_simple_example()
out, err = ffmpeg.run(
stream, capture_stdout=capture_stdout, capture_stderr=capture_stderr
)
if capture_stdout:
assert out == 'test\n'.encode()
else:
assert out is None
if capture_stderr:
assert err == ''.encode()
else:
assert err is None
def test__run__input_output(mocker):
mocker.patch.object(ffmpeg._run, 'compile', return_value=['cat'])
stream = _get_simple_example()
out, err = ffmpeg.run(stream, input='test'.encode(), capture_stdout=True)
assert out == 'test'.encode()
assert err is None
@pytest.mark.parametrize('capture_stdout', [True, False])
@pytest.mark.parametrize('capture_stderr', [True, False])
def test__run__error(mocker, capture_stdout, capture_stderr):
mocker.patch.object(ffmpeg._run, 'compile', return_value=['ffmpeg'])
stream = _get_complex_filter_example()
with pytest.raises(ffmpeg.Error) as excinfo:
out, err = ffmpeg.run(
stream, capture_stdout=capture_stdout, capture_stderr=capture_stderr
)
assert str(excinfo.value) == 'ffmpeg error (see stderr output for detail)'
out = excinfo.value.stdout
err = excinfo.value.stderr
if capture_stdout:
assert out == ''.encode()
else:
assert out is None
if capture_stderr:
assert err.decode().startswith('ffmpeg version')
else:
assert err is None
def test__run__multi_output():
in_ = ffmpeg.input(TEST_INPUT_FILE1)
out1 = in_.output(TEST_OUTPUT_FILE1)
out2 = in_.output(TEST_OUTPUT_FILE2)
ffmpeg.run([out1, out2], overwrite_output=True)
def test__run__dummy_cmd():
stream = _get_complex_filter_example()
ffmpeg.run(stream, cmd='true')
def test__run__dummy_cmd_list():
stream = _get_complex_filter_example()
ffmpeg.run(stream, cmd=['true', 'ignored'])
def test__filter__custom():
stream = ffmpeg.input('dummy.mp4')
stream = ffmpeg.filter(stream, 'custom_filter', 'a', 'b', kwarg1='c')
stream = ffmpeg.output(stream, 'dummy2.mp4')
assert stream.get_args() == [
'-i',
'dummy.mp4',
'-filter_complex',
'[0]custom_filter=a:b:kwarg1=c[s0]',
'-map',
'[s0]',
'dummy2.mp4',
]
def test__filter__custom_fluent():
stream = (
ffmpeg.input('dummy.mp4')
.filter('custom_filter', 'a', 'b', kwarg1='c')
.output('dummy2.mp4')
)
assert stream.get_args() == [
'-i',
'dummy.mp4',
'-filter_complex',
'[0]custom_filter=a:b:kwarg1=c[s0]',
'-map',
'[s0]',
'dummy2.mp4',
]
def test__merge_outputs():
in_ = ffmpeg.input('in.mp4')
out1 = in_.output('out1.mp4')
out2 = in_.output('out2.mp4')
assert ffmpeg.merge_outputs(out1, out2).get_args() == [
'-i',
'in.mp4',
'out1.mp4',
'out2.mp4',
]
assert ffmpeg.get_args([out1, out2]) == ['-i', 'in.mp4', 'out2.mp4', 'out1.mp4']
def test__input__start_time():
assert ffmpeg.input('in', ss=10.5).output('out').get_args() == [
'-ss',
'10.5',
'-i',
'in',
'out',
]
assert ffmpeg.input('in', ss=0.0).output('out').get_args() == [
'-ss',
'0.0',
'-i',
'in',
'out',
]
def test_multi_passthrough():
out1 = ffmpeg.input('in1.mp4').output('out1.mp4')
out2 = ffmpeg.input('in2.mp4').output('out2.mp4')
out = ffmpeg.merge_outputs(out1, out2)
assert ffmpeg.get_args(out) == [
'-i',
'in1.mp4',
'-i',
'in2.mp4',
'out1.mp4',
'-map',
'1',
'out2.mp4',
]
assert ffmpeg.get_args([out1, out2]) == [
'-i',
'in2.mp4',
'-i',
'in1.mp4',
'out2.mp4',
'-map',
'1',
'out1.mp4',
]
def test_passthrough_selectors():
i1 = ffmpeg.input(TEST_INPUT_FILE1)
args = ffmpeg.output(i1['1'], i1['2'], TEST_OUTPUT_FILE1).get_args()
assert args == [
'-i',
TEST_INPUT_FILE1,
'-map',
'0:1',
'-map',
'0:2',
TEST_OUTPUT_FILE1,
]
def test_mixed_passthrough_selectors():
i1 = ffmpeg.input(TEST_INPUT_FILE1)
args = ffmpeg.output(i1['1'].hflip(), i1['2'], TEST_OUTPUT_FILE1).get_args()
assert args == [
'-i',
TEST_INPUT_FILE1,
'-filter_complex',
'[0:1]hflip[s0]',
'-map',
'[s0]',
'-map',
'0:2',
TEST_OUTPUT_FILE1,
]
@@ -177,33 +691,131 @@ def test_pipe():
frame_count = 10
start_frame = 2
out = (
ffmpeg.input(
'pipe:0',
format='rawvideo',
pixel_format='rgb24',
video_size=(width, height),
framerate=10,
)
.trim(start_frame=start_frame)
.output('pipe:1', format='rawvideo')
)
args = out.get_args()
assert args == [
'-f',
'rawvideo',
'-video_size',
'{}x{}'.format(width, height),
'-framerate',
'10',
'-pixel_format',
'rgb24',
'-i',
'pipe:0',
'-filter_complex',
'[0]trim=start_frame=2[s0]',
'-map',
'[s0]',
'-f',
'rawvideo',
'pipe:1',
]
cmd = ['ffmpeg'] + args
p = subprocess.Popen(
cmd,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
in_data = bytes(
bytearray([random.randint(0, 255) for _ in range(frame_size * frame_count)])
)
p.stdin.write(in_data) # note: this could block, in which case need to use threads
p.stdin.close()
out_data = p.stdout.read()
assert len(out_data) == frame_size * (frame_count - start_frame)
assert out_data == in_data[start_frame * frame_size :]
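The byte-count arithmetic above relies on a rawvideo rgb24 frame being exactly width x height x 3 bytes (one byte per channel). A quick check with hypothetical dimensions (the real test defines its own `width`, `height`, and `frame_size` earlier in the file):

```python
# rgb24 packs one byte each for R, G, and B, so a raw frame occupies
# width * height * 3 bytes; trimming the first start_frame frames
# shortens the output by exactly that many whole frames.
width, height = 32, 24  # hypothetical example dimensions
frame_size = width * height * 3
frame_count, start_frame = 10, 2

in_bytes = frame_size * frame_count
out_bytes = frame_size * (frame_count - start_frame)
assert in_bytes == 23040
assert out_bytes == 18432
```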
def test__probe():
data = ffmpeg.probe(TEST_INPUT_FILE1)
assert set(data.keys()) == {'format', 'streams'}
assert data['format']['duration'] == '7.036000'
@pytest.mark.skipif(sys.version_info < (3, 3), reason='requires python3.3 or higher')
def test__probe_timeout():
with pytest.raises(subprocess.TimeoutExpired) as excinfo:
ffmpeg.probe(TEST_INPUT_FILE1, timeout=0)
assert 'timed out after 0 seconds' in str(excinfo.value)
def test__probe__exception():
with pytest.raises(ffmpeg.Error) as excinfo:
ffmpeg.probe(BOGUS_INPUT_FILE)
assert str(excinfo.value) == 'ffprobe error (see stderr output for detail)'
assert 'No such file or directory'.encode() in excinfo.value.stderr
def test__probe__extra_args():
data = ffmpeg.probe(TEST_INPUT_FILE1, show_frames=None)
assert set(data.keys()) == {'format', 'streams', 'frames'}
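As the test above shows, `probe` forwards extra keyword args to ffprobe as flags, with a `None` value meaning a bare flag. A minimal sketch of that conversion (the helper name and sorted ordering here are assumptions for illustration, not ffmpeg-python's verbatim code):

```python
def kwargs_to_cli_args(**kwargs):
    """Turn keyword args into CLI flags; None values become bare flags."""
    args = []
    for name in sorted(kwargs):
        args.append('-{}'.format(name))
        if kwargs[name] is not None:
            args.append(str(kwargs[name]))
    return args

# show_frames=None adds a bare -show_frames flag, matching the test above
assert kwargs_to_cli_args(show_frames=None) == ['-show_frames']
assert kwargs_to_cli_args(select_streams='v:0') == ['-select_streams', 'v:0']
```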
def get_filter_complex_input(flt, name):
m = re.search(r'\[([^]]+)\]{}(?=[[;]|$)'.format(name), flt)
if m:
return m.group(1)
else:
return None
def get_filter_complex_outputs(flt, name):
m = re.search(r'(^|[];]){}((\[[^]]+\])+)(?=;|$)'.format(name), flt)
if m:
return m.group(2)[1:-1].split('][')
else:
return None
def test__get_filter_complex_input():
assert get_filter_complex_input('', 'scale') is None
assert get_filter_complex_input('scale', 'scale') is None
assert get_filter_complex_input('scale[s3][s4];etc', 'scale') is None
assert get_filter_complex_input('[s2]scale', 'scale') == 's2'
assert get_filter_complex_input('[s2]scale;etc', 'scale') == 's2'
assert get_filter_complex_input('[s2]scale[s3][s4];etc', 'scale') == 's2'
def test__get_filter_complex_outputs():
assert get_filter_complex_outputs('', 'scale') is None
assert get_filter_complex_outputs('scale', 'scale') is None
assert get_filter_complex_outputs('scalex[s0][s1]', 'scale') is None
assert get_filter_complex_outputs('scale[s0][s1]', 'scale') == ['s0', 's1']
assert get_filter_complex_outputs('[s5]scale[s0][s1]', 'scale') == ['s0', 's1']
assert get_filter_complex_outputs('[s5]scale[s1][s0]', 'scale') == ['s1', 's0']
assert get_filter_complex_outputs('[s5]scale[s1]', 'scale') == ['s1']
assert get_filter_complex_outputs('[s5]scale[s1];x', 'scale') == ['s1']
assert get_filter_complex_outputs('y;[s5]scale[s1];x', 'scale') == ['s1']
def test__multi_output_edge_label_order():
scale2ref = ffmpeg.filter_multi_output(
[ffmpeg.input('x'), ffmpeg.input('y')], 'scale2ref'
)
out = ffmpeg.merge_outputs(
scale2ref[1].filter('scale').output('a'),
scale2ref[10000].filter('hflip').output('b'),
)
args = out.get_args()
flt_cmpl = args[args.index('-filter_complex') + 1]
out1, out2 = get_filter_complex_outputs(flt_cmpl, 'scale2ref')
assert out1 == get_filter_complex_input(flt_cmpl, 'scale')
assert out2 == get_filter_complex_input(flt_cmpl, 'hflip')

pyproject.toml

@@ -0,0 +1,15 @@
[tool.black]
skip-string-normalization = true
target_version = ['py27'] # TODO: drop Python 2 support (... "Soon").
include = '\.pyi?$'
exclude = '''
(
/(
\.eggs
| \.git
| \.tox
| \.venv
| dist
)/
)
'''


@@ -1,5 +1,40 @@
alabaster==0.7.12
atomicwrites==1.3.0
attrs==19.1.0
Babel==2.7.0
certifi==2019.3.9
chardet==3.0.4
docutils==0.14
filelock==3.0.12
future==0.17.1
idna==2.8
imagesize==1.1.0
importlib-metadata==0.17
Jinja2==2.10.1
MarkupSafe==1.1.1
more-itertools==7.0.0
numpy==1.16.4
packaging==19.0
pluggy==0.12.0
py==1.8.0
Pygments==2.4.2
pyparsing==2.4.0
pytest==4.6.1
pytest-mock==1.10.4
pytz==2019.1
requests==2.22.0
six==1.12.0
snowballstemmer==1.2.1
Sphinx==2.1.0
sphinxcontrib-applehelp==1.0.1
sphinxcontrib-devhelp==1.0.1
sphinxcontrib-htmlhelp==1.0.2
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.2
sphinxcontrib-serializinghtml==1.1.3
toml==0.10.0
tox==3.12.1
urllib3==1.25.3
virtualenv==16.6.0
wcwidth==0.1.7
zipp==0.5.1


@@ -1,32 +1,27 @@
from setuptools import setup
from textwrap import dedent
import subprocess
version = '0.2.0'
download_url = 'https://github.com/kkroening/ffmpeg-python/archive/v{}.zip'.format(
version
)
long_description = dedent(
'''\
ffmpeg-python: Python bindings for FFmpeg
=========================================
:Github: https://github.com/kkroening/ffmpeg-python
:API Reference: https://kkroening.github.io/ffmpeg-python/
'''
)
file_formats = [
'aac',
'ac3',
'avi',
'bmp',
'flac',
'gif',
'mov',
@@ -65,10 +60,8 @@ keywords = misc_keywords + file_formats
setup(
name='ffmpeg-python',
packages=['ffmpeg'],
version=version,
description='Python bindings for FFmpeg - with complex filtering support',
author='Karl Kroening',
author_email='karlk@kralnet.us',
url='https://github.com/kkroening/ffmpeg-python',
@@ -76,6 +69,16 @@ setup(
keywords=keywords,
long_description=long_description,
install_requires=['future'],
extras_require={
'dev': [
'future==0.17.1',
'numpy==1.16.4',
'pytest-mock==1.10.4',
'pytest==4.6.1',
'Sphinx==2.1.0',
'tox==3.12.1',
]
},
classifiers=[
'Intended Audience :: Developers',
'License :: OSI Approved :: Apache Software License',
@@ -89,5 +92,9 @@ setup(
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10',
],
)

tox.ini

@@ -4,10 +4,21 @@
# and then run "tox" from this directory.
[tox]
envlist = py27, py35, py36, py37, py38, py39, py310
[gh-actions]
python =
2.7: py27
3.5: py35
3.6: py36
3.7: py37
3.8: py38
3.9: py39
3.10: py310
[testenv]
commands = py.test -vv
deps =
future
pytest
pytest-mock