Pytest Unit Testing Series [v1.0.0] [Pytest Basics]

Pytest Installation and Configuration

Like unittest, pytest is a unit testing framework for the Python language. Compared with unittest, its test cases are easier to write, it can be run in more flexible ways, its failure messages are clearer, and its assertion syntax is more concise; it can also run test cases written with unittest and nose.
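The contrast is easiest to see side by side. Below is a minimal sketch, written for this article rather than taken from it, of the same check in both styles:

```python
import unittest

# unittest style: a class inheriting TestCase plus special assert methods
class TestAddUnittest(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 2, 3)

# pytest style: a plain function and a plain assert statement
def test_add():
    assert 1 + 2 == 3
```

pytest collects and runs both: plain functions natively, and unittest.TestCase subclasses through its unittest compatibility support.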

Installing Pytest

Open a command prompt and install pytest with pip, as shown below.

C:\Users\Administrator>pip install -U pytest
Collecting pytest
  Using cached pytest-5.4.1-py3-none-any.whl (246 kB)
Requirement already satisfied, skipping upgrade: pluggy<1.0,>=0.12 in c:\program files\python38\lib\site-packages (from pytest) (0.13.1)
Requirement already satisfied, skipping upgrade: atomicwrites>=1.0; sys_platform == "win32" in c:\program files\python38\lib\site-packages (from pytest) (1.3.0)
Requirement already satisfied, skipping upgrade: colorama; sys_platform == "win32" in c:\program files\python38\lib\site-packages (from pytest) (0.4.3)
Requirement already satisfied, skipping upgrade: wcwidth in c:\program files\python38\lib\site-packages (from pytest) (0.1.8)
Requirement already satisfied, skipping upgrade: packaging in c:\program files\python38\lib\site-packages (from pytest) (20.3)
Requirement already satisfied, skipping upgrade: attrs>=17.4.0 in c:\program files\python38\lib\site-packages (from pytest) (19.3.0)
Requirement already satisfied, skipping upgrade: more-itertools>=4.0.0 in c:\program files\python38\lib\site-packages (from pytest) (8.2.0)
Requirement already satisfied, skipping upgrade: py>=1.5.0 in c:\program files\python38\lib\site-packages (from pytest) (1.8.1)
Requirement already satisfied, skipping upgrade: six in c:\program files\python38\lib\site-packages (from packaging->pytest) (1.14.0)
Requirement already satisfied, skipping upgrade: pyparsing>=2.0.2 in c:\program files\python38\lib\site-packages (from packaging->pytest) (2.4.6)
Installing collected packages: pytest
Successfully installed pytest-5.4.1

Code Example

Create a new Python file and add the following code:

def test_equal():
    assert (1, 2, 3) == (1, 2, 3)

Then run the file from the command line with pytest xxx.py. The result looks like this:

C:\Users\Administrator>pytest C:\Users\Administrator\Desktop\123.py
===================================== test session starts ==================================================
platform win32 -- Python 3.8.1, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: C:\Users\Administrator
collected 1 item

Desktop\123.py .                                                       [100%]

====================================== 1 passed in 0.09s ==================================================

To see more detailed results, add the -v or --verbose option, i.e. pytest -v xxx.py. The result looks like this:

C:\Users\Administrator>pytest -v C:\Users\Administrator\Desktop\123.py
======================================= test session starts =================================================
platform win32 -- Python 3.8.1, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- c:\program files\python38\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Administrator
collected 1 item

Desktop/123.py::test_equal PASSED                                      [100%]

============================================ 1 passed in 0.04s ===============================================

Now let's look at a failing example. Create another .py file with the following code:

def test_equal():
    assert (1, 2, 3) == (3, 2, 1)

Then run it with pytest -v xxx.py. The result looks like this:

C:\Users\Administrator>pytest -v C:\Users\Administrator\Desktop\123.py
=========================================== test session starts ==============================================
platform win32 -- Python 3.8.1, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- c:\program files\python38\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Administrator
collected 1 item

Desktop/123.py::test_equal FAILED                                      [100%]

============================================= FAILURES =======================================================
____________________________________________ test_equal ______________________________________________________

    def test_equal():
>       assert(1,2,3)==(3,2,1)
E    assert (1, 2, 3) == (3, 2, 1)
E      At index 0 diff: 1 != 3
E      Full diff:
E      - (3, 2, 1)
E      ?  ^     ^
E      + (1, 2, 3)
E      ?  ^     ^

Desktop\123.py:2: AssertionError
====================================== short test summary info ===============================================
FAILED Desktop/123.py::test_equal - assert (1, 2, 3) == (3, 2, 1)
============================================= 1 failed in 0.20s ==============================================

Although the assertion fails, the output makes the reason clear: pytest uses carets (^) to point at the positions where the two values differ.
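Besides comparison asserts, pytest's assertion toolkit also covers checking that code raises an expected exception, via pytest.raises. A minimal sketch (the divide helper is made up for this illustration):

```python
import pytest

def divide(a, b):
    # hypothetical helper, defined only for this example
    return a / b

def test_divide_by_zero():
    # the with-block passes only if ZeroDivisionError is raised inside it
    with pytest.raises(ZeroDivisionError):
        divide(1, 0)
```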

Configuring PyCharm

In PyCharm, set pytest as the default test runner under Settings → Tools → Python Integrated Tools → Default test runner.

Uninstalling Pytest

C:\Users\Administrator>pip uninstall pytest
Found existing installation: pytest 5.4.1
Uninstalling pytest-5.4.1:
  Would remove:
    c:\program files\python38\lib\site-packages\_pytest\*
    c:\program files\python38\lib\site-packages\pytest-5.4.1.dist-info\*
    c:\program files\python38\lib\site-packages\pytest\*
    c:\program files\python38\scripts\py.test.exe
    c:\program files\python38\scripts\pytest.exe
Proceed (y/n)? y
  Successfully uninstalled pytest-5.4.1

C:\Users\Administrator>

Common Command-Line Options

Pytest Execution Rules

  • Run tests from the command line with the pytest command; the full form is pytest followed by options and a file name or path
  • If no options or arguments are given, pytest searches the current directory and its subdirectories for test files and runs the test code it finds
  • If one or more file names or directories are given, pytest finds and runs all tests under them, recursing into each directory and its subdirectories; it only runs test functions whose names start with test_, in files named test_*.py or *_test.py

The process by which pytest finds test files and test cases is called test discovery. Code is discovered as long as it follows these rules:

  • Test files should be named test_(something).py or (something)_test.py
  • Test functions and test class methods should be named test_(something)
  • Test classes should be named Test(something)
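
As an illustration of these rules, consider a hypothetical file test_discovery.py (all names below invented for this sketch); the names decide what pytest collects:

```python
# File: test_discovery.py -- the file name matches test_*.py, so pytest scans it

def multiply(a, b):
    # function under test; not collected (no test_ prefix)
    return a * b

def test_multiply():
    # collected: function name starts with test_
    assert multiply(3, 4) == 12

class TestMultiply:
    # collected: class name starts with Test, method name starts with test_
    def test_by_zero(self):
        assert multiply(5, 0) == 0

def helper_build_data():
    # not collected: helper name lacks the test_ prefix
    return [1, 2, 3]
```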

Test Code

Suppose we have the following code under test; save it in a file named tobetest.py:

import pytest
import allure

# Function under test
def add(a, b):
    return a + b

# Test equality
@allure.step
def test_add():
    assert add(3, 4) == 7

# Test inequality
@allure.step
def test_add2():
    assert add(17, 22) != 50

# Test less than or equal
@allure.step
def test_add3():
    assert add(17, 22) <= 50

# Test greater than or equal
@pytest.mark.aaaa
def test_add4():
    assert add(17, 22) >= 50

# Test membership
def test_in():
    a = "hello"
    b = "he"
    assert b in a

# Test non-membership
def test_not_in():
    a = "hello"
    b = "hi"
    assert b not in a

# Helper used to check primality
def is_prime(n):
    if n <= 1:
        return False
    for i in range(2, n):
        if n % i == 0:
            return False
    return True

# Expect a prime
def test_true():
    assert is_prime(13)

# Expect a non-prime
def test_not_true():
    assert not is_prime(7)

Running a Single File

E:\Programs\Python\Python_Pytest\TestScripts>pytest tobetest.py
============================================= test session starts ====================================================
platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.13.0
rootdir: E:\Programs\Python\Python_Pytest\TestScripts, inifile: pytest.ini
plugins: allure-pytest-2.6.3, cov-2.7.1, emoji-0.2.0, forked-1.0.2, instafail-0.4.1, nice-0.1.0, repeat-0.8.0, timeout-1.3.3, xdist-1.29.0
collected 8 items

tobetest.py ...F...F                                                   [100%]

================================================ FAILURES ===========================================================
_________________________________________________ test_add4 __________________________________________________________

    @pytest.mark.aaaa
    def test_add4():
>       assert add(17, 22) >= 50
E       assert 39 >= 50
E        +  where 39 = add(17, 22)

test_asserts.py:36: AssertionError
_________________________________________________ test_not_true ______________________________________________________

    def test_not_true():
>       assert not is_prime(7)
E       assert not True
E        +  where True = is_prime(7)

test_asserts.py:70: AssertionError
========================================= warnings summary ============================================================
c:\python37\lib\site-packages\_pytest\mark\structures.py:324
  c:\python37\lib\site-packages\_pytest\mark\structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.aaaa - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/latest/mark.html
    PytestUnknownMarkWarning,

test_asserts.py::test_add4
test_asserts.py::test_not_true
  c:\python37\lib\site-packages\pytest_nice.py:22: PytestDeprecationWarning: the `pytest.config` global is deprecated.  Please use `request.config` or `pytest_configure` (if you're a pytest plugin) instead.
    if report.failed and pytest.config.getoption('nice'):

-- Docs: https://docs.pytest.org/en/latest/warnings.html
================================ 2 failed, 6 passed, 3 warnings in 0.46 seconds =========================================
  • The first line shows the operating system, the Python version, and the versions of pytest and its core dependencies: platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.13.0
  • The second line shows the start directory for test discovery and the config file; here a pytest.ini was found, so it is displayed (with no config file, inifile is empty): rootdir: E:\Programs\Python\Python_Pytest\TestScripts, inifile: pytest.ini
  • The third line lists the currently installed pytest plugins: plugins: allure-pytest-2.6.3, cov-2.7.1, emoji-0.2.0, forked-1.0.2, instafail-0.4.1, nice-0.1.0, repeat-0.8.0, timeout-1.3.3, xdist-1.29.0
  • The fourth line, collected 8 items, means 8 test functions were found.
  • The fifth line, tobetest.py ...F...F, shows the test file name; each dot after it is a passing test. Besides dots you may see F (failure), E (error, an exception outside the test), s (skipped), x (xfail, expected to fail and did fail), and X (xpass, expected to fail but actually passed, which does not meet expectations).
  • 2 failed, 6 passed, 3 warnings in 0.46 seconds summarizes the test results and the run time.
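
To see the other status letters in practice, a sketch file (hypothetical, using pytest's built-in skip and xfail marks) could contain:

```python
import pytest

def test_passes():
    # reported as "."
    assert True

@pytest.mark.skip(reason="demonstration only")
def test_skipped():
    # reported as "s": never executed under pytest
    assert False

@pytest.mark.xfail
def test_expected_failure():
    # reported as "x" (xfail): expected to fail, and it does
    assert 1 == 2

@pytest.mark.xfail
def test_unexpected_pass():
    # reported as "X" (xpass): expected to fail, but passes
    assert 1 == 1
```

Running pytest -v on such a file would show one PASSED, one SKIPPED, one XFAIL, and one XPASS.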

Running a Single Test Function

Use the command pytest path/filename::test_function_name. The result is as follows:

E:\Programs\Python\Python_Pytest\TestScripts>pytest test_asserts.py::test_true
==================================================== test session starts ===============================================
platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.13.0
rootdir: E:\Programs\Python\Python_Pytest\TestScripts, inifile: pytest.ini
plugins: allure-pytest-2.6.3, cov-2.7.1, emoji-0.2.0, forked-1.0.2, instafail-0.4.1, nice-0.1.0, repeat-0.8.0, timeout-1.3.3, xdist-1.29.0
collected 1 item

test_asserts.py .                                                      [100%]

================================================== warnings summary ====================================================
c:\python37\lib\site-packages\_pytest\mark\structures.py:324
  c:\python37\lib\site-packages\_pytest\mark\structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.aaaa - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/latest/mark.html
    PytestUnknownMarkWarning,

-- Docs: https://docs.pytest.org/en/latest/warnings.html
========================================== 1 passed, 1 warnings in 0.07 seconds =========================================

Other Command-Line Forms

  • Run a single test function in a module: pytest test_mod.py::test_func
  • Run a single test method of a class in a module: pytest test_mod.py::TestClass::test_method
  • Run a single test module: pytest test_module.py
  • Run all tests under a directory: pytest test/
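
A layout matching those node IDs, as a sketch (the module and names are hypothetical):

```python
# File: test_mod.py

def test_func():
    # addressed as test_mod.py::test_func
    assert "py" in "pytest"

class TestClass:
    def test_method(self):
        # addressed as test_mod.py::TestClass::test_method
        assert len("pytest") == 6
```

pytest test_mod.py::TestClass::test_method would run only test_method.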

常用pytest命令选项

--collect-only

Before running a batch of test cases, we often want to know which cases will be executed and whether that matches our expectations. The --collect-only option covers this scenario, as the following run shows:

D:\PythonPrograms\Python_Pytest\TestScripts>pytest --collect-only
================================================= test session starts ===================================================
platform win32 -- Python 3.7.2, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\PythonPrograms\Python_Pytest\TestScripts, inifile:
plugins: allure-adaptor-1.7.10
collected 17 items
<Package 'D:\\PythonPrograms\\Python_Pytest\\TestScripts'>
  <Module 'test_asserts.py'>
    <Function 'test_add'>
    <Function 'test_add2'>
    <Function 'test_add3'>
    <Function 'test_add4'>
    <Function 'test_in'>
    <Function 'test_not_in'>
    <Function 'test_true'>
  <Module 'test_fixture1.py'>
    <Function 'test_numbers_3_4'>
    <Function 'test_strings_a_3'>
  <Module 'test_fixture2.py'>
    <Class 'TestUM'>
      <Function 'test_numbers_5_6'>
      <Function 'test_strings_b_2'>
  <Module 'test_one.py'>
    <Function 'test_equal'>
    <Function 'test_not_equal'>
  <Module 'test_two.py'>
    <Function 'test_default'>
    <Function 'test_member_access'>
    <Function 'test_asdict'>
    <Function 'test_replace'>

============================================= no tests ran in 0.09 seconds ==============================================
-k

This option lets us select the tests to run with an expression. If a test name is unique, or several test names share the same prefix or suffix, we can use -k to run just those tests, as the following run shows:

D:\PythonPrograms\Python_Pytest\TestScripts>pytest -k "asdict or default" --collect-only
================================================= test session starts ===================================================
platform win32 -- Python 3.7.2, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\PythonPrograms\Python_Pytest\TestScripts, inifile:
plugins: allure-adaptor-1.7.10
collected 17 items / 15 deselected
<Package 'D:\\PythonPrograms\\Python_Pytest\\TestScripts'>
  <Module 'test_two.py'>
    <Function 'test_default'>
    <Function 'test_asdict'>

============================================ 15 deselected in 0.06 seconds ===========================================

The output shows that combining -k with --collect-only lets us preview which test methods the expression will select.
Dropping --collect-only and keeping just -k actually runs test_default and test_asdict:

D:\PythonPrograms\Python_Pytest\TestScripts>pytest -k "asdict or default"
================================================ test session starts =================================================
platform win32 -- Python 3.7.2, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\PythonPrograms\Python_Pytest\TestScripts, inifile:
plugins: allure-adaptor-1.7.10
collected 17 items / 15 deselected

test_two.py ..                                                           [100%]

============================================= 2 passed, 15 deselected in 0.07 seconds ================================

With a little care when naming test cases, -k gives us an easy way to run a whole family of tests; the expression may contain and, or, and not.

-m

Marks tests and groups them, so that only marked cases run; this is how we execute a chosen test set. The code below adds a mark to two of our earlier test methods:

@pytest.mark.run_these_cases
def test_member_access():
    """Access object members by attribute name."""
    t = Task('buy milk', 'brian')
    assert t.summary == 'buy milk'
    assert t.owner == 'brian'
    assert (t.done, t.id) == (False, None)

@pytest.mark.run_these_cases
def test_asdict():
    """_asdict() returns a dictionary."""
    t_task = Task('do something', 'okken', True, 21)
    t_dict = t_task._asdict()
    expected_dict = {'summary': 'do something',
                     'owner': 'okken',
                     'done': True,
                     'id': 21}
    assert t_dict == expected_dict

Run pytest -v -m run_these_cases; the result is as follows:

D:\PythonPrograms\Python_Pytest\TestScripts>pytest -v -m run_these_cases
============================================== test session starts ======================================================
platform win32 -- Python 3.7.2, pytest-4.0.2, py-1.8.0, pluggy-0.12.0 -- c:\python37\python.exe
cachedir: .pytest_cache
rootdir: D:\PythonPrograms\Python_Pytest\TestScripts, inifile:
plugins: allure-adaptor-1.7.10
collected 17 items / 15 deselected

test_two.py::test_member_access PASSED                                   [ 50%]
test_two.py::test_asdict PASSED                                          [100%]

======================================= 2 passed, 15 deselected in 0.07 seconds =========================================

-m also accepts expressions combining several mark names, e.g. -m "mark1 and mark2", -m "mark1 and not mark2", or -m "mark1 or mark2".
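The PytestUnknownMarkWarning seen in the earlier runs can be avoided by registering custom marks, as the warning itself suggests. A minimal pytest.ini sketch (mark names taken from this article's examples):

```ini
# pytest.ini, placed in the project rootdir
[pytest]
markers =
    run_these_cases: cases selected with -m run_these_cases
    aaaa: demo mark used in tobetest.py
```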

-x

pytest runs every test case it discovers. If a test function fails an assertion or raises an external exception, that test case stops, pytest marks it as failed, and continues with the next test case. When debugging, however, we often want the whole session to stop at the first failure; the -x option supports exactly that scenario:

E:\Programs\Python\Python_Pytest\TestScripts>pytest -x
=============================================== test session starts =================================================
platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
rootdir: E:\Programs\Python\Python_Pytest\TestScripts
plugins: allure-pytest-2.6.3
collected 17 items

test_asserts.py ...F

===================================================== FAILURES ===========================================================
____________________________________________________ test_add4 ___________________________________________________________

    def test_add4():
>       assert add(17,22) >= 50
E       assert 39 >= 50
E        +  where 39 = add(17, 22)

test_asserts.py:34: AssertionError
============================================ warnings summary ===========================================================
c:\python37\lib\site-packages\_pytest\mark\structures.py:324
  c:\python37\lib\site-packages\_pytest\mark\structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.run_these_cases - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/latest/mark.html
    PytestUnknownMarkWarning,

-- Docs: https://docs.pytest.org/en/latest/warnings.html
=============================== 1 failed, 3 passed, 1 warnings in 0.41 seconds ==========================================

The output shows that 17 test cases were collected but only 4 ran: 3 passed, 1 failed, and execution stopped there.
Running again without the -x option (here also passing --tb=no) gives:

E:\Programs\Python\Python_Pytest\TestScripts>pytest --tb=no
============================================ test session starts =====================================================
platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
rootdir: E:\Programs\Python\Python_Pytest\TestScripts
plugins: allure-pytest-2.6.3
collected 17 items

test_asserts.py ...F..F                                                  [ 41%]
test_fixture1.py ..                                                                                                                                                                                                              [ 52%]
test_fixture2.py ..                                                                                                                                                                                                              [ 64%]
test_one.py .F                                                                                                                                                                                                                   [ 76%]
test_two.py ....                                                         [100%]

============================================= warnings summary =======================================================
c:\python37\lib\site-packages\_pytest\mark\structures.py:324
  c:\python37\lib\site-packages\_pytest\mark\structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.run_these_cases - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/latest/mark.html
    PytestUnknownMarkWarning,

-- Docs: https://docs.pytest.org/en/latest/warnings.html
==================================== 3 failed, 14 passed, 1 warnings in 0.31 seconds =================================

This time all 17 collected test cases ran: 14 passed and 3 failed. The --tb=no option turns off traceback output; use it when you only want the results without the detailed failure messages.

--maxfail=num

-x stops the whole session at the first failure. What if we want to stop only after several failures? The --maxfail option supports that scenario, as the following run shows:

E:\Programs\Python\Python_Pytest\TestScripts>pytest --maxfail=2 --tb=no
============================================= test session starts =======================================================
platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
rootdir: E:\Programs\Python\Python_Pytest\TestScripts
plugins: allure-pytest-2.6.3
collected 17 items

test_asserts.py ...F..F

================================================= warnings summary ======================================================
c:\python37\lib\site-packages\_pytest\mark\structures.py:324
  c:\python37\lib\site-packages\_pytest\mark\structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.run_these_cases - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/latest/mark.html
    PytestUnknownMarkWarning,

-- Docs: https://docs.pytest.org/en/latest/warnings.html
========================================= 2 failed, 5 passed, 1 warnings in 0.22 seconds ================================

The output shows that 17 cases were collected and 7 ran; execution stopped as soon as the failure count reached 2.

--tb=

Command and description:

pytest --showlocals   # show local variables in tracebacks
pytest -l             # show local variables (shortcut)
pytest --tb=auto      # (default) 'long' tracebacks for the first and last entry, but 'short' style for the other entries
pytest --tb=long      # exhaustive, informative traceback formatting
pytest --tb=short     # shorter traceback format
pytest --tb=line      # only one line per failure
pytest --tb=native    # Python standard library formatting
pytest --tb=no        # no traceback at all
pytest --full-trace   # causes very long traces to be printed on error (longer than --tb=long)

-v (--verbose)

-v, --verbose: increase verbosity.

-q (--quiet)

-q, --quiet: decrease verbosity.

--lf (--last-failed)

--lf, --last-failed: rerun only the tests that failed at the last run (or all if none failed)

E:\Programs\Python\Python_Pytest\TestScripts>pytest --lf --tb=no
================================== test session starts ==========================================
platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
rootdir: E:\Programs\Python\Python_Pytest\TestScripts
plugins: allure-pytest-2.6.3
collected 9 items / 6 deselected / 3 selected                                                                                                                                                                                          
run-last-failure: rerun previous 3 failures (skipped 7 files)
test_asserts.py FF                                                                                                                                                                                                               [ 66%]
test_one.py F                                                                                                                                                                                                                    [100%]
============================= 3 failed, 6 deselected in 0.15 seconds ============================
--ff (--failed-first)

--ff, --failed-first: run all tests but run the last failures first. This may re-order tests and thus lead to repeated fixture setup/teardown

E:\Programs\Python\Python_Pytest\TestScripts>pytest --ff --tb=no
================================= test session starts ==================================
platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
rootdir: E:\Programs\Python\Python_Pytest\TestScripts
plugins: allure-pytest-2.6.3
collected 17 items                                                                                                                                                                                                                     
run-last-failure: rerun previous 3 failures first
test_asserts.py FF                                                                                                                                                                                                               [ 11%]
test_one.py F                                                                                                                                                                                                                    [ 17%]
test_asserts.py .....                                                                                                                                                                                                            [ 47%]
test_fixture1.py ..                                                                                                                                                                                                              [ 58%]
test_fixture2.py ..                                                                                                                                                                                                              [ 70%]
test_one.py .                                                                                                                                                                                                                    [ 76%]
test_two.py ....                                                                                                                                                                                                                 [100%]
======================== warnings summary ==========================================
c:\python37\lib\site-packages\_pytest\mark\structures.py:324
  c:\python37\lib\site-packages\_pytest\mark\structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.run_these_cases - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/latest/mark.html
    PytestUnknownMarkWarning,

-- Docs: https://docs.pytest.org/en/latest/warnings.html
================= 3 failed, 14 passed, 1 warnings in 0.25 seconds ==========================

-s and --capture=method

-s is equivalent to --capture=no.

 (venv) D:\Python_Pytest\TestScripts>pytest -s
============================= test session starts ============================================
platform win32 -- Python 3.7.3, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Python_Pytest\TestScripts, inifile:
plugins: allure-adaptor-1.7.10
collected 18 items

test_asserts.py ...F...F
test_fixture1.py setup_module================>
setup_function------>
test_numbers_3_4
.teardown_function--->
setup_function------>
test_strings_a_3
.teardown_function--->
teardown_module=============>
test_fixture2.py setup_class=========>
setup_method----->>
setup----->
test_numbers_5_6
.teardown-->
teardown_method-->>
setup_method----->>
setup----->
test_strings_b_2
.teardown-->
teardown_method-->>
teardown_class=========>
test_one.py .F
test_two.py ....

========================================== FAILURES ============================================
____________________________________________ test_add4 ______________________________________________

    @pytest.mark.aaaa
    def test_add4():
>       assert add(17, 22) >= 50
E       assert 39 >= 50
E        +  where 39 = add(17, 22)

test_asserts.py:36: AssertionError
_____________________________________________ test_not_true ______________________________________________

    def test_not_true():
>       assert not is_prime(7)
E       assert not True
E        +  where True = is_prime(7)

test_asserts.py:70: AssertionError
_______________________________________ test_not_equal ________________________________________________

    def test_not_equal():
>       assert (1, 2, 3) == (3, 2, 1)
E       assert (1, 2, 3) == (3, 2, 1)
E         At index 0 diff: 1 != 3
E         Use -v to get the full diff

test_one.py:9: AssertionError
================================== 3 failed, 15 passed in 0.15 seconds =================================

--capture=method: per-test capturing method, one of fd|sys|no.

-l (--showlocals)

-l, --showlocals: show locals in tracebacks (disabled by default).

--durations=N

--durations=N: show N slowest setup/test durations (N=0 for all).
This option is mostly used when tuning test code: it lists the N slowest setup/call/teardown phases, slowest first (N=0 lists all of them).
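
A sketch that makes the durations report non-trivial (the file name and sleep duration are made up for this example):

```python
# File: test_slow.py
import time

def test_fast():
    assert sum(range(10)) == 45

def test_slow():
    time.sleep(0.2)  # simulate a slow operation
    assert sum(range(100)) == 4950
```

pytest --durations=3 test_slow.py would list the call phase of test_slow at the top of the durations report.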

(venv) D:\Python_Pytest\TestScripts>pytest --duration=5
===================================================== test session starts ==============================================
platform win32 -- Python 3.7.3, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Python_Pytest\TestScripts, inifile:
plugins: allure-adaptor-1.7.10
collected 18 items

test_asserts.py ...F...F                                                 [ 44%]
test_fixture1.py ..                                                      [ 55%]
test_fixture2.py ..                                                      [ 66%]
test_one.py .F                                                           [ 77%]
test_two.py ....                                                         [100%]

======================================================= FAILURES ======================================================
_______________________________________________________ test_add4 _____________________________________________________

    @pytest.mark.aaaa
    def test_add4():
>       assert add(17, 22) >= 50
E       assert 39 >= 50
E        +  where 39 = add(17, 22)

test_asserts.py:36: AssertionError
_____________________________________________________ test_not_true _____________________________________________________

    def test_not_true():
>       assert not is_prime(7)
E       assert not True
E        +  where True = is_prime(7)

test_asserts.py:70: AssertionError
___________________________________________________ test_not_equal ______________________________________________________

    def test_not_equal():
>       assert (1, 2, 3) == (3, 2, 1)
E       assert (1, 2, 3) == (3, 2, 1)
E         At index 0 diff: 1 != 3
E         Use -v to get the full diff

test_one.py:9: AssertionError
================================================ slowest 5 test durations ===============================================
0.01s call     test_asserts.py::test_add4

(0.00 durations hidden.  Use -vv to show these durations.)
========================================== 3 failed, 15 passed in 0.27 seconds ==========================================

The output ends with the hint (0.00 durations hidden. Use -vv to show these durations.). Adding -vv shows them:

(venv) D:\Python_Pytest\TestScripts>pytest --duration=5 -vv
============================= test session starts =============================
platform win32 -- Python 3.7.3, pytest-4.0.2, py-1.8.0, pluggy-0.12.0 -- c:\python37\python.exe
cachedir: .pytest_cache
rootdir: D:\Python_Pytest\TestScripts, inifile:
plugins: allure-adaptor-1.7.10
collected 18 items

test_asserts.py::test_add PASSED                                         [  5%]
test_asserts.py::test_add2 PASSED                                        [ 11%]
test_asserts.py::test_add3 PASSED                                        [ 16%]
test_asserts.py::test_add4 FAILED                                        [ 22%]
test_asserts.py::test_in PASSED                                          [ 27%]
test_asserts.py::test_not_in PASSED                                      [ 33%]
test_asserts.py::test_true PASSED                                        [ 38%]
test_asserts.py::test_not_true FAILED                                    [ 44%]
test_fixture1.py::test_numbers_3_4 PASSED                                [ 50%]
test_fixture1.py::test_strings_a_3 PASSED                                [ 55%]
test_fixture2.py::TestUM::test_numbers_5_6 PASSED                        [ 61%]
test_fixture2.py::TestUM::test_strings_b_2 PASSED                        [ 66%]
test_one.py::test_equal PASSED                                           [ 72%]
test_one.py::test_not_equal FAILED                                       [ 77%]
test_two.py::test_default PASSED                                         [ 83%]
test_two.py::test_member_access PASSED                                   [ 88%]
test_two.py::test_asdict PASSED                                          [ 94%]
test_two.py::test_replace PASSED                                         [100%]

====================================================== FAILURES =========================================================
______________________________________________________ test_add4 ________________________________________________________

    @pytest.mark.aaaa
    def test_add4():
>       assert add(17, 22) >= 50
E       assert 39 >= 50
E        +  where 39 = add(17, 22)

test_asserts.py:36: AssertionError
____________________________________________________ test_not_true ____________________________________________________

    def test_not_true():
>       assert not is_prime(7)
E       assert not True
E        +  where True = is_prime(7)

test_asserts.py:70: AssertionError
___________________________________________________ test_not_equal ____________________________________________________

    def test_not_equal():
>       assert (1, 2, 3) == (3, 2, 1)
E       assert (1, 2, 3) == (3, 2, 1)
E         At index 0 diff: 1 != 3
E         Full diff:
E         - (1, 2, 3)
E         ?  ^     ^
E         + (3, 2, 1)
E         ?  ^     ^

test_one.py:9: AssertionError
============================================== slowest 5 test durations ===============================================
0.00s setup    test_one.py::test_not_equal
0.00s setup    test_fixture1.py::test_strings_a_3
0.00s setup    test_asserts.py::test_add3
0.00s call     test_fixture2.py::TestUM::test_strings_b_2
0.00s call     test_asserts.py::test_in
========================================= 3 failed, 15 passed in 0.16 seconds =========================================
-r

The -r option generates a short summary report at the end of the run; it is combined with one or more of the following characters:

Option  Description
f       failed
E       error
s       skipped
x       xfailed
X       xpassed
p       passed
P       passed with output
a       all except pP
A       all
For example, to see only the failed and the skipped tests:

(venv) E:\Python_Pytest\TestScripts>pytest -rfs
=================================================== test session starts =================================================
platform win32 -- Python 3.7.3, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
rootdir: E:\Python_Pytest\TestScripts, inifile:
plugins: allure-adaptor-1.7.10
collected 18 items

test_asserts.py ...F...F                                                 [ 44%]
test_fixture1.py ..                                                      [ 55%]
test_fixture2.py ..                                                      [ 66%]
test_one.py .F                                                           [ 77%]
test_two.py ....                                                         [100%]

==================================================== FAILURES ===========================================================
____________________________________________________ test_add4 __________________________________________________________

    @pytest.mark.aaaa
    def test_add4():
>       assert add(17, 22) >= 50
E       assert 39 >= 50
E        +  where 39 = add(17, 22)

test_asserts.py:36: AssertionError
__________________________________________________ test_not_true _______________________________________________________

    def test_not_true():
>       assert not is_prime(7)
E       assert not True
E        +  where True = is_prime(7)

test_asserts.py:70: AssertionError
____________________________________________________ test_not_equal _____________________________________________________

    def test_not_equal():
>       assert (1, 2, 3) == (3, 2, 1)
E       assert (1, 2, 3) == (3, 2, 1)
E         At index 0 diff: 1 != 3
E         Use -v to get the full diff

test_one.py:9: AssertionError
================================================ short test summary info ================================================
FAIL test_asserts.py::test_add4
FAIL test_asserts.py::test_not_true
FAIL test_one.py::test_not_equal
======================================== 3 failed, 15 passed in 0.10 seconds ============================================
pytest --help: getting more options

Run pytest --help on the command line. The printed result shows how the pytest command is used, usage: pytest [options] [file_or_dir] [file_or_dir] [...], followed by the available options and their descriptions.

C:\Users\Administrator>pytest --help
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
positional arguments:
  file_or_dir

general:
  -k EXPRESSION         only run tests which match the given substring
                        expression. Example: -k 'test_method or test_other'
                        matches all test functions and classes whose name
                        contains 'test_method' or 'test_other'.
  -m MARKEXPR           only run tests matching given mark expression.
                        example: -m 'mark1 and not mark2'.
  --markers             show markers (builtin, plugin and per-project ones).
  -x, --exitfirst       exit instantly on first error or failed test.
  --maxfail=num         exit after first num failures or errors.
  --strict              marks not registered in configuration file raise
                        errors.
  -c file               load configuration from `file` instead of trying to
                        locate one of the implicit configuration files.
  --rootdir=ROOTDIR     Define root directory for tests.
  --fixtures, --funcargs
                        show available fixtures, sorted by plugin appearance
  --pdb                 start the interactive Python debugger on errors or
                        KeyboardInterrupt.
  --capture=method      per-test capturing method: one of fd|sys|no.
  -s                    shortcut for --capture=no.
  --runxfail            run tests even if they are marked xfail
  --lf, --last-failed   rerun only the tests that failed at the last run (or
                        all if none failed)
  --ff, --failed-first  run all tests but run the last failures first.
  --nf, --new-first     run tests from new files first, then the rest of the
                        tests sorted by file mtime
  --sw, --stepwise      exit on test fail and continue from last failing test
                        next time
  ...

reporting:
  -v, --verbose         increase verbosity.
  -q, --quiet           decrease verbosity.
  -r chars              show extra test summary info as specified by chars
                        (f)ailed, (E)error, (s)skipped, (x)failed, (X)passed,
                        (p)assed, (P)assed with output, (a)ll except pP.
  -l, --showlocals      show locals in tracebacks (disabled by default).
  --tb=style            traceback print mode (auto/long/short/line/native/no).
  --durations=N         show N slowest setup/test durations (N=0 for all).
  --junit-xml=path      create junit-xml style report file at given path.
  ...

collection:
  --collect-only        only collect tests, don't execute them.
  --ignore=path         ignore path during collection (multi-allowed).
  --confcutdir=dir      only load conftest.py's relative to specified dir.
  --noconftest          Don't load any conftest.py files.
  --doctest-modules     run doctests in all .py modules
  ...

test session debugging and configuration:
  --basetemp=dir        base temporary directory for this test run.
  --version             display pytest lib version and import information.
  -h, --help            show help message and configuration info
  -p name               early-load given plugin (multi-allowed).
  --debug               store internal tracing debug information in
                        'pytestdebug.log'.
  ...

(the full output also lists logging options, the [pytest] ini-options, and
environment variables such as PYTEST_ADDOPTS)

to see available markers type: pytest --markers
to see available fixtures type: pytest --fixtures
(shown according to specified file_or_dir or current dir if not specified;
fixtures with leading '_' are only shown with the '-v' option)

Understanding pytest's configuration files

The configuration files pytest recognizes:

Whichever configuration file you choose, their formats are almost identical:

pytest.ini     The main pytest configuration file; it can change pytest's default behavior.
conftest.py    A local plugin library; its hook functions and fixtures apply to the directory it lives in and all of its subdirectories.
__init__.py    When every test subdirectory contains this file, test files with the same name can exist in multiple test directories.
tox.ini        Used if you work with the tox tool. It is similar to pytest.ini, except that it is tox's configuration file; putting the pytest configuration inside tox.ini avoids having both pytest.ini and tox.ini at once.
setup.cfg      Also in ini format, and it affects the behavior of setup.py. It matters when publishing a Python package: a few extra lines in setup.py let you run all the pytest tests with python setup.py test, and setup.cfg can then also store the pytest configuration.
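As a concrete illustration of the conftest.py entry above, a minimal local-plugin sketch; the fixture name and the task data are made up for illustration:

```python
# conftest.py -- a minimal local-plugin sketch; the fixture name and the
# task data are invented for this example
import pytest

@pytest.fixture
def sample_task():
    # any test in this directory or a subdirectory can request this fixture
    # simply by naming it as a parameter, with no import needed
    return {"id": 1, "summary": "write tests", "done": False}
```

A test file anywhere under the same directory can then declare `def test_something(sample_task): ...` and pytest injects the fixture automatically.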
pytest.ini
;---
; Excerpted from "Python Testing with pytest",
; published by The Pragmatic Bookshelf.
; Copyrights apply to this code. It may not be used to create training material,
; courses, books, articles, and the like. Contact us if you are in doubt.
; We make no guarantees that this code is fit for any purpose.
; Visit http://www.pragmaticprogrammer.com/titles/bopytest for more book information.
;---
[pytest]
addopts = -rsxX -l --tb=short --strict
xfail_strict = true
;... more options ...

tox.ini

;---
; Excerpted from "Python Testing with pytest",
; published by The Pragmatic Bookshelf.
; Copyrights apply to this code. It may not be used to create training material,
; courses, books, articles, and the like. Contact us if you are in doubt.
; We make no guarantees that this code is fit for any purpose.
; Visit http://www.pragmaticprogrammer.com/titles/bopytest for more book information.
;---
;... tox specific stuff ...

[pytest]
addopts = -rsxX -l --tb=short --strict
xfail_strict = true
;... more options ...

setup.cfg

;... packaging specific stuff ...

[tool:pytest]
addopts = -rsxX -l --tb=short --strict
xfail_strict = true
;... more options ...

Running pytest --help also shows every available ini option:


[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg file found:

  markers (linelist)        markers for test functions
  empty_parameter_set_mark (string)
                            default marker for empty parametersets
  norecursedirs (args)      directory patterns to avoid for recursion
  testpaths (args)          directories to search for tests when no files or
                            directories are given in the command line.
  usefixtures (args)        list of default fixtures to be used with this
                            project
  python_files (args)       glob-style file patterns for Python test module
                            discovery
  python_classes (args)     prefixes or glob names for Python test class
                            discovery
  python_functions (args)   prefixes or glob names for Python test function
                            and method discovery
  console_output_style (string)
                            console output: "classic", or with additional
                            progress information ("progress" (percentage) |
                            "count").
  xfail_strict (bool)       default for the strict parameter of xfail markers
                            when not given explicitly (default: False)
  junit_suite_name (string) Test suite name for JUnit report
  junit_logging (string)    Write captured log messages to JUnit report: one
                            of no|system-out|system-err
  doctest_optionflags (args)
                            option flags for doctests
  cache_dir (string)        cache directory path.
  filterwarnings (linelist) Each line specifies a pattern for
                            warnings.filterwarnings. Processed after -W and
                            --pythonwarnings.
  log_cli (bool)            enable log display during test run (also known as
                            "live logging").
  addopts (args)            extra command line options
  minversion (string)       minimally required pytest version
  ...

environment variables:
  PYTEST_ADDOPTS            extra command line options
  PYTEST_PLUGINS            comma-separated plugins to load during startup
  PYTEST_DISABLE_PLUGIN_AUTOLOAD
                            set to disable plugin auto-loading
  PYTEST_DEBUG              set to enable debug tracing of pytest's internals

Plugins can add ini-file options

Besides the options listed above, plugins and conftest.py files can add new options, and those new options also show up in pytest --help.

Changing the default command-line options

The previous articles have already covered many pytest options, e.g. -v/--verbose for verbose output and -l/--showlocals for showing the local variables in a failing test's stack. If you use such options all the time but don't want to retype them, set addopts in pytest.ini:

[pytest]
addopts = -rsxX -l --tb=short --strict
Option summary:
-rsxX       report the reasons for every test that was skipped, expected to fail (xfail), or expected to fail but actually passed (xpass)
-l          report the local variables in the stack of every failing test
--tb=short  shorten the traceback output, keeping only the file and line number
--strict    forbid the use of markers that are not registered in the configuration file
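To see what the s, x, and X summary characters report, a throwaway test file along these lines (all names hypothetical) produces one summary line each for a skip, an xfail, and a pass:

```python
# test_report_chars.py -- a hypothetical file to exercise -rsxX
import pytest

@pytest.mark.skip(reason="shows up in the summary as (s)kipped")
def test_skipped():
    assert True

@pytest.mark.xfail(reason="shows up in the summary as (x)failed")
def test_expected_failure():
    # deliberately wrong, so the xfail mark is honored
    assert 1 + 1 == 3

def test_passes():
    assert 1 + 1 == 2
```

Running pytest -rsxX against this file then appends a short summary section listing the skipped and xfailed tests with their reasons.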

Registering markers to guard against typos

Register markers in pytest.ini:

[pytest]
markers =
    smoke: Run the smoke test functions for tasks project
    get: Run the test functions that test tasks.get()

Once the markers are registered, they can be listed with pytest --markers:

(venv) E:\Programs\Python\Python_Pytest\pytest-nice>pytest --markers
@pytest.mark.smoke: Run the smoke test functions for tasks project
@pytest.mark.get: Run the test functions that test tasks.get()

Now, once --strict is added to addopts, unregistered markers can no longer be used, which keeps marker typos from slipping through.
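A sketch of tests carrying the two registered markers (the test bodies are invented):

```python
# test_markers.py -- hypothetical tests using the registered markers
import pytest

@pytest.mark.smoke
def test_service_alive():
    assert True

@pytest.mark.get
def test_get_returns_task():
    tasks = {1: "write tests"}
    assert tasks.get(1) == "write tests"
```

pytest -m smoke then runs only test_service_alive, and under --strict a typo such as @pytest.mark.smok aborts the run instead of silently creating a new marker.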

Specifying a minimum pytest version

The minversion option specifies the lowest pytest version allowed to run the test cases. For example, to check whether two floats are approximately equal we use the approx() function, but that feature only appeared in pytest 3.0, so we can add this to pytest.ini:

[pytest]
minversion = 3.0
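The approx() usage that motivates this setting looks like the following (the file name is made up):

```python
# test_approx.py -- why minversion matters: pytest.approx() needs pytest >= 3.0
import pytest

def test_float_sum():
    # 0.1 + 0.2 is 0.30000000000000004 in binary floating point,
    # so a plain == comparison with 0.3 would fail
    assert 0.1 + 0.2 == pytest.approx(0.3)
```

With minversion = 3.0 in place, an older pytest refuses to run the suite instead of failing with a confusing AttributeError.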

Telling pytest to ignore certain directories

When searching for tests, pytest recursively walks every subdirectory; the norecursedirs option trims that search.
The default value of norecursedirs is .* build dist CVS _darcs {arch} *.egg.
To make pytest ignore the Tasks project's src directory, add it to norecursedirs:

[pytest]
norecursedirs = .* venv src *.egg dist build

Specifying test directories

testpaths tells pytest where to look. It is a list of paths, relative to the root directory, that limits the search for test cases; the option only takes effect when pytest is started without a file-or-directory argument or a test identifier.

tasks_proj/
|------pytest.ini
|------src
|       |------tasks
|       |------api.py
|       |------......
|------test
        |------conftest.py
        |------func
        |       |------__init__.py
        |       |------test_add.py
        |       |------......
        |------unit
                |------__init__.py
                |------test_task.py
                |------......

With a directory structure like this, to make the test directory pytest's search path:

[pytest]
testpaths = test

Then, as long as pytest is started from tasks_proj, it goes straight to the test path.

Changing the test discovery rules

pytest searches for and runs tests according to a fixed set of rules:

  • Start the search from one or more directories
  • Recursively look for test modules in those directories and all of their subdirectories
  • A test module is a file named test_*.py or *_test.py
  • Within a test module, collect functions whose names start with test_
  • Collect classes whose names start with Test and that do not define an __init__ method, then collect the methods inside them whose names start with test_

Now let's change those rules.
By default pytest looks for classes whose names start with Test and that do not define an __init__() function; python_classes changes that:

[pytest]
python_classes = *Test Test* *Suite
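With that setting, a class like the following (the name is invented) is collected even though it does not start with Test:

```python
# test_suites.py -- collected under python_classes = *Test Test* *Suite,
# because LoginSuite matches the *Suite pattern
class LoginSuite:
    def test_username_not_empty(self):
        assert len("admin") > 0
```

Without the *Suite pattern, pytest would silently skip this class during collection.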

Just like python_classes, python_files changes the default search rule, so that pytest is no longer limited to files that start with test_ or end with _test:

[pytest]
python_files = test_* *_test check_*

In the same way, the naming rule for test functions and methods can be changed:

[pytest]
python_functions = test_* check_*
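Combined with the check_* patterns above, pytest would now also collect a file like this (file and function names are made up):

```python
# check_login.py -- collected thanks to the python_files and
# python_functions patterns shown above
def check_status_code():
    # 2xx status codes all start with the digit 2
    assert 200 // 100 == 2

def check_redirect_scheme():
    assert "https://example.com".startswith("https")
```

Neither the file nor the functions start with test_, yet both check_* functions are discovered and run as tests.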

Disallowing XPASS

Setting xfail_strict = true makes tests that are marked @pytest.mark.xfail but actually pass be reported as failures.
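A quick illustration; the bug reference is invented:

```python
# test_xpass.py -- with xfail_strict = true this test is reported as FAILED,
# because it is marked xfail yet actually passes
import pytest

@pytest.mark.xfail(reason="hypothetical bug #123, supposedly not fixed yet")
def test_unexpectedly_passes():
    assert 2 + 2 == 4
```

Without xfail_strict, this test shows up as XPASS and the run still succeeds; with it, the unexpected pass forces you to revisit the stale xfail mark.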

Avoiding file-name collisions

duplicate
|------dup_a
|       |------test_func.py
|------dup_b
        |------test_func.py

The two files contain the functions test_a() and test_b() respectively:

# dup_a/test_func.py
def test_a():
    pass

# dup_b/test_func.py
def test_b():
    pass

With this directory layout, the two same-named files clash even though their contents differ. Each file can be run on its own, but running pytest from the duplicate directory fails with the following error:


(venv) E:\Programs\Python\Python_Pytest\SourceCode\ch6\duplicate>pytest
================== test session starts ===================================
platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
rootdir: E:\Programs\Python\Python_Pytest, inifile: pytest.ini
plugins: xdist-1.29.0, timeout-1.3.3, repeat-0.8.0, nice-0.1.0, instafail-0.4.1, forked-1.0.2, emoji-0.2.0, allure-pytest-2.6.3
collected 1 item / 1 errors

=========================================== ERRORS ============================
____________________ ERROR collecting SourceCode/ch6/duplicate/b/test_func.py __________________________
import file mismatch:
imported module 'test_func' has this __file__ attribute:
  E:\Programs\Python\Python_Pytest\SourceCode\ch6\duplicate\a\test_func.py
which is not the same as the test file we want to collect:
  E:\Programs\Python\Python_Pytest\SourceCode\ch6\duplicate\b\test_func.py
HINT: remove __pycache__ / .pyc files and/or use a unique basename for your test file modules
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================== 1 error in 0.32 seconds ==========================================

The error message doesn't point directly at the cause. To fix it, just add an empty __init__.py file to each subdirectory; in fact, adding __init__.py to every test subdirectory is a good habit.
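A minimal sketch of the fix, assuming the layout from the example above:

```shell
# recreate the example layout, then give each test directory its own
# package identity so the two test_func.py modules no longer collide
mkdir -p duplicate/dup_a duplicate/dup_b
touch duplicate/dup_a/__init__.py duplicate/dup_b/__init__.py
```

With the packages in place, pytest imports the modules as dup_a.test_func and dup_b.test_func, so the same base name no longer causes an import file mismatch.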
