
feat: add Statistics & Anti-game chapters to release notes#305

Open
miroslavpojer wants to merge 2 commits into master from
feature/184-New-service-check---skip-label-stats

Conversation

miroslavpojer (Collaborator) commented Apr 15, 2026

Overview

Introduces skip-label usage statistics as a new optional section in generated release notes.

  • Added show-stats-chapters action input to toggle visibility of statistics chapters
  • Implemented StatsChapters class to collect and render per-author and per-label counts of skipped records (PRs and issues), broken down by PullRequestRecord and IssueRecord
  • Updated ActionInputs, action.yml, and configuration documentation to reflect the new input
  • Updated existing builder tests to disable the new chapter via an autouse fixture in a scoped conftest.py, preventing unintended output changes
  • Added full unit test coverage for StatsChapters (32 tests) with shared make_pr / make_issue factory fixtures extracted to tests/unit/release_notes_generator/chapters/conftest.py
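The per-author/per-label aggregation described above can be sketched as follows; the record shape and function name are illustrative, not the actual `StatsChapters` internals:

```python
from collections import Counter


def aggregate_skip_stats(records, skip_labels):
    """Count skipped records per author and per skip label.

    `records` is an iterable of (author, labels) pairs; a record counts
    as skipped when it carries at least one label from `skip_labels`.
    """
    by_author: Counter = Counter()
    by_label: Counter = Counter()
    for author, labels in records:
        skipped = [lbl for lbl in labels if lbl in skip_labels]
        if not skipped:
            continue  # record is not skipped; it appears in normal chapters
        by_author[author or "(unknown)"] += 1
        for lbl in skipped:
            by_label[lbl] += 1
    return by_author, by_label
```

Rendering would then sort both counters by count (e.g. via `Counter.most_common()`) into the markdown tables the chapter emits.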

Release Notes

  • Added statistics chapters to release notes: per-author and per-label skip counts for PRs and issues, controlled by the new show-stats-chapters input

Related

Closes #184

Summary by CodeRabbit

  • New Features

    • Added Statistics & Anti-game chapters displaying skip-label usage statistics aggregated by PR author, issue author/assignee, type, and label
    • New configuration option to control statistics chapter visibility (enabled by default)
  • Documentation

    • Added feature documentation and configuration reference for statistics chapters

- Introduced a new feature to surface skip-label usage statistics in release notes.
- Added `show-stats-chapters` input to toggle the display of statistics chapters.
- Implemented `StatsChapters` class to collect and render statistics on skipped records.
- Updated action inputs and configuration documentation to reflect new feature.
- Added tests for the new statistics functionality and ensured existing tests are updated accordingly.
- Created shared fixtures in conftest files for better test management.
@miroslavpojer miroslavpojer self-assigned this Apr 15, 2026
coderabbitai bot commented Apr 15, 2026

Warning

Rate limit exceeded

@miroslavpojer has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 41 minutes and 27 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 41 minutes and 27 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 661e78b2-fca2-4226-ad86-4dde92d020f8

📥 Commits

Reviewing files that changed from the base of the PR and between c8f62d9 and 7d85a89.

📒 Files selected for processing (1)
  • .github/workflows/test.yml

Walkthrough

This PR introduces a new "Statistics & Anti-game Chapters" feature that tracks and displays skip-label usage statistics aggregated by PR author, issue author/assignee, issue label, and PR label. The feature is controlled via a new show-stats-chapters action input (default true) and includes backend implementation, comprehensive tests, documentation, and configuration updates.

Changes

  • **Action Input & Configuration** — `action.yml`, `release_notes_generator/action_inputs.py`, `release_notes_generator/utils/constants.py`, `docs/configuration_reference.md`: Added the new `show-stats-chapters` action input (boolean, default `true`) and wired it through `ActionInputs` validation, getter methods, and documentation. Added constants for the new feature.
  • **Statistics Chapter Implementation** — `release_notes_generator/chapters/stats_chapters.py`, `release_notes_generator/builder/builder.py`: Implemented the `StatsChapters` class, which aggregates skip-label statistics across records by author, assignee, and label, rendering markdown tables sorted by skip count. Integrated conditional rendering into the builder's content assembly logic.
  • **Documentation** — `README.md`, `docs/features/stats_chapters.md`, `docs/features/skip_labels.md`: Added feature documentation describing stats chapters behavior, rendering logic, configuration controls, and examples. Updated the skip labels documentation to reference the new feature.
  • **Test Fixtures** — `tests/unit/release_notes_generator/chapters/conftest.py`, `tests/unit/conftest.py`: Added pytest fixture factories for constructing `PullRequestRecord` and `IssueRecord` test instances with consistent mock shapes, and established the conftest structure for unit tests.
  • **Feature Tests** — `tests/unit/release_notes_generator/chapters/test_stats_chapters.py`, `tests/unit/release_notes_generator/test_action_inputs.py`: Added 348 lines of tests for `StatsChapters` covering statistics aggregation, sorting, filtering, edge cases (missing authors/labels/assignees), and subsection rendering. Added action input validation tests.
  • **Test Infrastructure Patch** — `tests/unit/release_notes_generator/builder/conftest.py`: Added an autouse fixture patching `get_show_stats_chapters` to return `False` for all existing builder tests, preventing output variations from the new feature.
  • **Repository Guidance** — `.github/copilot-instructions.md`: Added guidance on pytest conftest file structure and fixture scoping constraints.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • tmikula-dev
  • Zejnilovic

Poem

🐰 A stats chapter hops in view,
Skip labels counted, tried and true,
Authors sorted, labels too,
Anti-games now crystal clear—
Rabbit's logic brings good cheer! ✨

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Title check — ✅ Passed: Title clearly and concisely summarizes the main feature addition (Statistics & Anti-game chapters to release notes), matching the primary changeset.
  • Description check — ✅ Passed: Description follows the template structure with Overview, Release Notes, and Related sections, providing comprehensive details about the changes.
  • Linked Issues check — ✅ Passed: The pull request implements all coding requirements from issue #184: the `StatsChapters` class collects per-author/per-label skip counts for both PRs and issues, with proper handling of assignees and missing metadata.
  • Out of Scope Changes check — ✅ Passed: All changes align with issue #184 requirements (configuration inputs, documentation, statistics implementation, and test fixtures). No unrelated changes detected.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 86.54%, above the required 80.00% threshold.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch feature/184-New-service-check---skip-label-stats


coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (3)
tests/unit/release_notes_generator/test_action_inputs.py (1)

401-410: Default-path test should assert the “input absent” fallback, not only the “true” value path.

Line 401 currently verifies a truthy return, but it does not prove that missing input uses the "true" default.

✅ Suggested test adjustment
```diff
 def test_get_show_stats_chapters_default_true(mocker):
     """show-stats-chapters defaults to True when the input is absent (default 'true')."""
-    mocker.patch("release_notes_generator.action_inputs.get_action_input", return_value="true")
+    mock_get = mocker.patch(
+        "release_notes_generator.action_inputs.get_action_input",
+        side_effect=lambda _name, default="": default,
+    )
     assert ActionInputs.get_show_stats_chapters() is True
+    mock_get.assert_called_once_with("show-stats-chapters", "true")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/release_notes_generator/test_action_inputs.py` around lines 401 -
410, Update test_get_show_stats_chapters_default_true to simulate a missing
input rather than returning the string "true": mock
release_notes_generator.action_inputs.get_action_input to raise the same
exception used when an input is absent (e.g., KeyError) or return None depending
on the actual get_action_input behavior, then call
ActionInputs.get_show_stats_chapters() and assert it returns True; this ensures
the test verifies the "input absent" fallback path for
ActionInputs.get_show_stats_chapters instead of merely the explicit "true"
string path.
release_notes_generator/chapters/stats_chapters.py (1)

25-25: Inject skip labels into StatsChapters instead of reading ActionInputs here.

StatsChapters is pure aggregation/rendering logic, but it currently pulls runtime inputs itself. This couples it to env/input boundaries and complicates reuse/testing.

♻️ Proposed refactor
```diff
@@
-from release_notes_generator.action_inputs import ActionInputs
@@
-    def __init__(self, print_empty_chapters: bool = True):
+    def __init__(self, print_empty_chapters: bool = True, skip_labels: set[str] | None = None):
         self.print_empty_chapters = print_empty_chapters
+        self._skip_labels = skip_labels or set()
@@
-        skip_labels = set(ActionInputs.get_skip_release_notes_labels())
@@
-            non_skip_labels = [lbl for lbl in record.labels if lbl not in skip_labels]
+            non_skip_labels = [lbl for lbl in record.labels if lbl not in self._skip_labels]
```

Then pass skip_labels=set(ActionInputs.get_skip_release_notes_labels()) at construction from release_notes_generator/builder/builder.py.

As per coding guidelines, **/*.py: Keep pure logic free of environment access where practical, and prefer routing env/I/O through a single boundary module.

Also applies to: 62-77

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@release_notes_generator/chapters/stats_chapters.py` at line 25, StatsChapters
is pulling runtime inputs directly
(ActionInputs.get_skip_release_notes_labels()) which couples pure
aggregation/rendering logic to environment access; modify StatsChapters so it
accepts skip_labels (e.g., a parameter on its __init__ or builder method) and
remove any direct calls to ActionInputs inside
release_notes_generator/chapters/stats_chapters.py (including the usages around
the current block ~lines 62-77), then update
release_notes_generator/builder/builder.py to call
ActionInputs.get_skip_release_notes_labels() once and pass set(...) into the
StatsChapters constructor; ensure all internal references in StatsChapters use
the provided skip_labels instance variable.
tests/unit/release_notes_generator/chapters/test_stats_chapters.py (1)

24-348: Add MockerFixture annotations for mocker in this test module.

The new tests are solid functionally, but mocker params should be typed consistently with the repository test typing rule.

✍️ Minimal pattern to apply
```diff
+from pytest_mock import MockerFixture
@@
-def _make_stats(mocker, *, skip_labels=None, print_empty=True):
+def _make_stats(
+    mocker: MockerFixture, *, skip_labels: list[str] | None = None, print_empty: bool = True
+) -> StatsChapters:
@@
-def test_switch_on_no_skipped(mocker, make_pr):
+def test_switch_on_no_skipped(mocker: MockerFixture, make_pr):
```

As per coding guidelines, **/{conftest,test_*}.py: Annotate pytest fixture parameters with MockerFixture (from pytest_mock).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/release_notes_generator/chapters/test_stats_chapters.py` around
lines 24 - 348, Add type annotations for the pytest-mock fixture: import
MockerFixture from pytest_mock at top of the module and change every test and
helper signature that takes the mocker fixture to accept mocker: MockerFixture
(e.g., def _make_stats(mocker: MockerFixture, *, skip_labels=None,
print_empty=True): and all tests like def test_switch_on_no_skipped(mocker:
MockerFixture, make_pr):). Ensure the import is present and update all
occurrences of the unannotated mocker parameter across the tests in this module.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 831c50a3-a544-4286-ad12-e90970570516

📥 Commits

Reviewing files that changed from the base of the PR and between 6c7497c and c8f62d9.

📒 Files selected for processing (14)
  • .github/copilot-instructions.md
  • README.md
  • action.yml
  • docs/configuration_reference.md
  • docs/features/skip_labels.md
  • docs/features/stats_chapters.md
  • release_notes_generator/action_inputs.py
  • release_notes_generator/builder/builder.py
  • release_notes_generator/chapters/stats_chapters.py
  • release_notes_generator/utils/constants.py
  • tests/unit/release_notes_generator/builder/conftest.py
  • tests/unit/release_notes_generator/chapters/conftest.py
  • tests/unit/release_notes_generator/chapters/test_stats_chapters.py
  • tests/unit/release_notes_generator/test_action_inputs.py

Comment on lines +428 to +434
```python
@staticmethod
def get_show_stats_chapters() -> bool:
    """
    Get the show stats chapters parameter value from the action inputs.
    """
    return get_action_input(SHOW_STATS_CHAPTERS, "true").lower() == "true"
```


🛠️ Refactor suggestion | 🟠 Major

Invalid show-stats-chapters values are silently accepted as False.

get_show_stats_chapters() coerces any non-"true" value to False, so the validation at Line 599-600 can never fail for malformed inputs (e.g., "treu"), causing silent behavior changes.

💡 Proposed fix (strict bool parsing with validation)
```diff
@@
     @staticmethod
     def get_show_stats_chapters() -> bool:
         """
         Get the show stats chapters parameter value from the action inputs.
         """
-        return get_action_input(SHOW_STATS_CHAPTERS, "true").lower() == "true"
+        raw = (get_action_input(SHOW_STATS_CHAPTERS, "true") or "").strip().lower()
+        return raw == "true"
@@
-        show_stats_chapters = ActionInputs.get_show_stats_chapters()
-        ActionInputs.validate_input(show_stats_chapters, bool, "Show stats chapters must be a boolean.", errors)
+        show_stats_chapters_raw = (get_action_input(SHOW_STATS_CHAPTERS, "true") or "").strip().lower()
+        if show_stats_chapters_raw not in ("true", "false"):
+            errors.append("Show stats chapters must be 'true' or 'false'.")
+        show_stats_chapters = show_stats_chapters_raw == "true"
```

As per coding guidelines, **/*.py: Centralize parsing and validation in one input layer.

Also applies to: 599-600

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@release_notes_generator/action_inputs.py` around lines 428 - 434,
get_show_stats_chapters currently coerces any non-"true" string to False,
masking malformed inputs like "treu" and preventing downstream validation in
release_notes_generator.action_inputs; update get_show_stats_chapters to perform
strict parsing and validation at the input layer: read the raw value via
get_action_input(SHOW_STATS_CHAPTERS, "true"), validate it is exactly "true" or
"false" (case-insensitive), and return the corresponding bool; if the value is
invalid, raise a ValueError or call the existing input validation helper so the
error surfaces early (centralize parsing/validation here rather than relying on
later checks).

Comment on lines +19 to +25
```python
@pytest.fixture(autouse=True)
def _disable_stats_chapters(mocker):
    """Disable stats chapters for all existing builder tests to avoid output changes."""
    mocker.patch(
        "release_notes_generator.builder.builder.ActionInputs.get_show_stats_chapters",
        return_value=False,
    )
```

🛠️ Refactor suggestion | 🟠 Major

Add fixture type annotations for conftest compliance.

Line 20 should type the mocker parameter and return type to match the fixture typing rule.

✍️ Suggested patch
```diff
 import pytest
+from pytest_mock import MockerFixture


 @pytest.fixture(autouse=True)
-def _disable_stats_chapters(mocker):
+def _disable_stats_chapters(mocker: MockerFixture) -> None:
     """Disable stats chapters for all existing builder tests to avoid output changes."""
     mocker.patch(
         "release_notes_generator.builder.builder.ActionInputs.get_show_stats_chapters",
         return_value=False,
     )
```

As per coding guidelines: "**/{conftest,test_*}.py: Annotate pytest fixture parameters with MockerFixture (from pytest_mock) ..."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/release_notes_generator/builder/conftest.py` around lines 19 - 25,
The fixture _disable_stats_chapters needs pytest-mock typing: add an import
"from pytest_mock import MockerFixture" and annotate the fixture signature to
accept mocker: MockerFixture and return None (def
_disable_stats_chapters(mocker: MockerFixture) -> None:), keeping the existing
body that patches ActionInputs.get_show_stats_chapters; update only the
signature and add the import so conftest follows the fixture typing rule.

miroslavpojer added a commit that referenced this pull request Apr 16, 2026
#305)

- Added integration test specification in SPEC.md to outline the scope and structure of tests.
- Created conftest.py for shared fixtures and helpers for offline integration tests.
- Implemented test_full_pipeline_snapshot.py to validate the complete release notes generation pipeline.
- Added test_bulk_sub_issue_collector.py for live testing of BulkSubIssueCollector with real GitHub API.
- Created fixtures/test_full_pipeline_snapshot.md to store expected output for snapshot tests.
- Added initial structure for live tests in the live directory.

Development

Successfully merging this pull request may close these issues.

New service check - skip label stats

1 participant