cpan.org mail relay is really slow

r/perl

Published by /u/popefelix on Monday 11 August 2025 22:40

Normally I wouldn't care - I don't need my CPAN emails to come instantaneously. Most of them are spam. But. I'm trans, and I created my Gravatar for my CPAN email address before I transitioned, so every time I look at my CPAN author page I see a guy looking back at me. I try to log into Gravatar to fix this, they send me a login code, but it expires before the code ever hits my gmail. Is there any way I can fix this? I would really like to put up a recent picture of myself on my CPAN author page.


sv_grow - new allocs are at least a sensible size

Perl commits on GitHub

Published by richardleach on Monday 11 August 2025 22:07

sv_grow - new allocs are at least a sensible size

We don't believe that the size of new buffers allocated via Perl_sv_grow
should be rounded up, but with the new `expected_size` macro we can
ensure that `newlen` is not smaller than the minimum possible allocation
and is at least rounded up to the nearest PTRSIZE.
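The "round up to the nearest PTRSIZE" arithmetic can be illustrated with a small Perl sketch (a hedged illustration only, not the C code from the commit; a PTRSIZE of 8 bytes is an assumption):

```perl
use strict;
use warnings;

my $PTRSIZE = 8;    # assumed pointer size in bytes

# Round $n up to the next multiple of $PTRSIZE with the usual
# add-then-mask trick; this works because $PTRSIZE is a power of two.
sub round_up_to_ptrsize {
    my ($n) = @_;
    return ($n + $PTRSIZE - 1) & ~($PTRSIZE - 1);
}

printf "%d -> %d\n", $_, round_up_to_ptrsize($_) for 1, 8, 13;
# 1 -> 8, 8 -> 8, 13 -> 16
```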

On Ubuntu, I open a file for append. The file does not have a newline character at the end. My program writes lines using print($file "\n$var"); At the point where the appending starts, an unexpected blank line is created. It is as if Perl detects that the file does not end with a newline and adds one as it opens the file. There is definitely no newline at the end of the existing file before the program runs. I cannot find any reference to this in the documentation. Is this what is supposed to happen?
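One way to verify the premise (a sketch; ends_with_newline is a hypothetical helper, not something Perl provides) is to inspect the file's last byte directly, since opening for append does not modify the file:

```perl
use strict;
use warnings;
use Fcntl qw(SEEK_END);

# Return true if the last byte of $path is a newline.
sub ends_with_newline {
    my ($path) = @_;
    open my $fh, '<', $path or die "open $path: $!";
    return 0 if -s $path == 0;             # empty file: no newline
    seek $fh, -1, SEEK_END or die "seek: $!";
    read $fh, my $last, 1;
    return $last eq "\n";
}
```

Running this before and after the append shows whether the trailing newline was already present; if it was, the leading "\n" in print($file "\n$var") is what produces the blank line.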

Perl 🐪 Weekly #733 - Perl using AI

dev.to #perl

Published by Gabor Szabo on Monday 11 August 2025 07:03

Originally published at Perl Weekly 733

Hi there,

As you know, I teach Python and Rust (and I would teach Perl as well if clients asked for it), and recently I started to wonder how to change my material and my assignments so they retain any value in the age of AI.

I tried several of the assignments I usually give to my students, pasted them into Copilot, and got a working solution with tests and documentation. The solutions were good, and in a way 'better' than what I teach. Better in that they already included input validation, exception handling, and separation of concerns. Most of these things are relatively advanced; at least, you usually don't learn about them in the first few hours of learning.

The students can also ask the AI to explain each word and each symbol in the code. The question remains, of course, whether they will understand the language the AI uses or whether they need a simpler explanation.

In any case, it seems these assignments are now rather useless, as the students - especially the ones only interested in the credit - will use AI to generate the solution and won't learn a thing beyond copy-pasting the assignment and the solution.

If you have ideas about how and what I should teach to help them become productive, let me know in an email!

Enjoy your week!

--
Your editor: Gabor Szabo.

Articles

I started to develop SPVM::Mojolicious

Caching in Perl

Redis / Valkey

A Rusty Web? An Excursion of a Perl Guy into Rust Land

AWS Lambda

AWS Lambda is a serverless compute service that allows you to run code without having to manage servers.

Serialisation in Perl

Storable vs Sereal

GPT5 , Perl and PDL

Seems #ChatGPT5 uses Perl along with PDL for the agentic coding step - What does that mean?

GPT5 and Perl

Apparently GPT5 is trained on datasets that overrepresent Perl. Heh?

Tiobe index for August 2025 puts Perl in the top 10 above PHP

For years the Perl community has been saying that TIOBE is flawed. Now what?

CVE-2025-40909

The Perl Foundation

SUSE Donates USD 11,500 to The Perl and Raku Foundation

Grants

PEVANS Core Perl 5: Grant Report for June/July 2025

Perl

This week in PSC (199) | 2025-08-07

The Weekly Challenge

The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.

The Weekly Challenge - 334

Welcome to a new week with a couple of fun tasks, "Range Sum" and "Nearest Valid Point". If you are new to the weekly challenge, then why not join us and have fun every week? For more information, please read the FAQ.

RECAP - The Weekly Challenge - 333

Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Straight Line" and "Duplicate Zeros" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.

TWC333

The solutions demonstrate deep understanding of both Perl programming and algorithmic thinking, making them excellent reference implementations for these challenges.

Straight Zeros

A well-written, technically sound review that showcases Raku’s capabilities for solving algorithmic problems elegantly. Great for learners interested in Raku or prefix-sum techniques!

Odd last date letters, binary word list buddy

The implementations balance conciseness with readability while leveraging Perl's strengths in text processing. Each solution could be extended with more robust input validation and edge case handling for production use.

Back in the Saddle Again

Both solutions are clean, well-structured Perl code using modern features like postderef, and include good test cases covering various scenarios.

Perl Weekly Challenge: Week 333

These solutions demonstrate deep understanding of both problem domains and Perl programming, making them excellent reference implementations. The mathematical approach to Task 1 is particularly noteworthy for its robustness, while the concise functional solution to Task 2 showcases Perl's expressiveness.

Shift and Duplicate

These implementations serve as excellent examples of how Perl can be used for both mathematical computations and array transformations with equal effectiveness.

streaming numbers

The solutions provide excellent examples of how to solve these problems idiomatically in each language while maintaining the same underlying logic. They showcase Luca's proficiency in multiple programming languages and environments.

Perl Weekly Challenge 333

These solutions are efficient (O(n) time complexity) and handle all edge cases properly. The Perl solutions leverage PDL's powerful matrix operations for Task 1. Both implementations for Task 2 follow a straightforward approach to duplicate zeros while maintaining the original array length.

Double O Straight (Not Stirred)

These implementations serve as excellent examples of how to approach algorithmic problems in Perl, balancing mathematical rigor with language idioms. The dual solutions for Task 2 particularly highlight Perl's flexibility in allowing different programming styles for the same problem.

Zero is Not the End of the Line

The solutions provide excellent examples of how to approach these challenges in different languages while maintaining the same core algorithms. They showcase both the similarities and differences between languages when solving identical problems.

Straight zeroes

The implementations provide excellent examples of how to approach these challenges in Perl, balancing correctness, clarity, and practicality. The detailed output formatting in Task 1 is particularly noteworthy for making the solutions more informative and useful.

The Weekly Challenge #333

Both solutions handle the edge cases mentioned in the problem descriptions and provide the expected outputs for all given examples. The solutions are efficient with time complexity O(n) for both tasks, where n is the number of points or array elements.

Duplicate Straights Are a Line of Zeroes

The solutions match all given test cases and handle edge conditions properly. The Python implementations are concise while remaining readable.

Duplicate Lines

These solutions match all the provided test cases and handle edge conditions properly. The Python solutions use list comprehensions and slicing for conciseness, while the Perl solutions follow similar logic with Perl's array operations.

Weekly collections

NICEPERL's lists

Great CPAN modules released last week;
MetaCPAN weekly report.

Events

Paris.pm monthly meeting

August 13, 2025

Paris.pm monthly meeting

September 10, 2025

You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.

Want to see more? See the archives of all the issues.

Not yet subscribed to the newsletter? Join us free of charge!

(C) Copyright Gabor Szabo
The articles are copyright the respective authors.

Maintaining Perl (Tony Cook) July 2025

Perl Foundation News

Published by alh on Monday 11 August 2025 05:21


Tony writes:

```
[Hours] [Activity]

2025/07/01 Tuesday
0.93 #23390 review behaviour, testing, review associated PR 23392 and approve
0.65 #23326 review discussion, add fix to 5.38, 5.40 votes files, mark closable with comment
0.47 #23384 review discussion, testing and comment
1.43 #23385 review and comments

3.48

2025/07/02 Wednesday
0.90 #23385 more review, comments
0.80 #23389 review

1.70

2025/07/03 Thursday
1.32 #23150 review, review discussion, comments
0.08 #23385 brief follow-up
0.43 #23384 review discussion and decide not to follow-up
0.15 #22120 follow-up
1.15 #23340 read through discussion, think about solutions
1.52 #23340 research and long-ish comment

4.65

2025/07/07 Monday
0.23 github notifications
0.65 #23358 review, research
0.88 #23358 comments

1.76

2025/07/09 Wednesday
2.17 #23326 follow-up, work on a fix
0.10 #1674 rebase and re-push PR 23219
0.03 #1674 check CI and apply to blead
0.62 #23326 fix non-threaded, testing and re-push
1.02 #23375 review, testing and approve with comment
0.45 #23370 review, research and approve

4.39

2025/07/10 Thursday
2.17 #23226 testing and follow-up, work on a more extensive test, testing, push for CI/smoking
0.43 #23416 review and comment
0.88 #23419 review and comment
0.57 #23326 look into CI failures (alas Windows), fixes and push

4.05

2025/07/11 Friday
0.13 #23226 make PR

0.13

2025/07/14 Monday
0.22 #23349 review updates and approve
1.08 #23433 review and comment, work on PR for SLU to re-introduce apos in names upstream PR#141
1.18 #23226 look into openbsd test failures, debugging
0.57 #23226 debugging

3.05

2025/07/15 Tuesday
0.37 #23433 follow-up on SLU PR#141
1.80 #23226 debugging
0.82 #23226 testing, debugging

2.99

2025/07/16 Wednesday
1.80 #23226 follow-up, testing, push with a workaround, work on minor clean up, comments
0.37 #23226 more follow-up, minor fix, push for testing

2.17

2025/07/17 Thursday
1.28 #23429 review, comments
0.13 #23413 review and approve

1.41

2025/07/21 Monday
0.60 #23301 review updates and comments
0.12 #23312 follow-up
0.88 #23429 review, testing, research and comment
1.27 #23202 review
2.02 #23202 more review, comments

4.89

2025/07/22 Tuesday
0.78 check coverity scan report, reasonable errors though none apply in the circumstances reported
0.40 #23301 testing, comment
0.20 #23460 review and comment
1.08 #23447 review, try to break it

2.46

2025/07/23 Wednesday
0.17 #23301 review updates and approve
0.08 #23460 comment, review and approve
0.35 #23461 review upstream ticket and the change, comment
0.40 #23447 manage to break it, comment
0.28 jkeenan's pthread thread on p5p/#23306 testing
0.40 #23462 review, comments
0.08 #23392 re-check and apply to blead
0.47 #23464 review issue, reproduce, review code, test a fix and make PR 23465
0.45 #23178 re-check and apply to blead
0.55 #23414 review, comment
0.48 #23462 look into CI failure, review some more, comment
0.82 #23360 review, testing, comments

4.53

2025/07/24 Thursday
1.03 #22125 rebase, testing freebsd case suggested by Dave, comment, more testing
0.20 #23468 review, research and approve
0.10 #23467 review, research and approve
0.42 #23463 research, testing and comment
1.45 #23340 research, comment

3.20

2025/07/28 Monday
0.30 #23481 review and comment
1.23 #23367 review, testing and approve
0.23 #23462 review updates
0.17 #23479 review and approve
1.07 #23477 review, testing
0.52 #23477 more testing, approve
0.42 #23459 review, research and comment
0.58 #21877 sv_gets review

4.52

2025/07/29 Tuesday
0.60 #23459 testing and comment
0.73 #23323 research and comment
0.08 #23481 review updates and approve
0.32 #23488 review, research and comment

1.73

2025/07/30 Wednesday
0.20 #23433 link SLU ticket #141 follow-up
0.15 #23499 follow-up
0.27 #23489 review and comment
0.18 #23491 review and approve
0.15 #23494 review and approve
0.08 #23495 review and approve
0.08 #23496 review and approve
0.08 #23498 review and approve
0.15 #23501 review, research and comment
0.38 #23503 review test results, testing without the builtin math on Linux and comment
0.47 #23508 review, try to break it and approve
0.32 #23506 review, comment
1.30 #23483 review, research

3.81

2025/07/31 Thursday
0.08 #23501 review and approve
0.22 #23500 review and approve
0.22 #23499 review and apply to blead
0.90 #23509 review and approve
0.33 #23514 review and approve
0.38 #23513 review and comment
1.20 #22125 try to reproduce reported freebsd failure, manage to reproduce, research, more testing and comment

3.33

Which I calculate is 58.25 hours.

Approximately 60 tickets were reviewed or worked on, and 4 patches were applied.
```

RECAP - The Weekly Challenge - 332

The Weekly Challenge

Published on Monday 11 August 2025 00:00

Thank you Team PWC for your continuous support and encouragement.

The Weekly Challenge - 334

The Weekly Challenge

Published on Monday 11 August 2025 00:00

Welcome to the Week #334 of The Weekly Challenge.

RECAP - The Weekly Challenge - 333

The Weekly Challenge

Published on Monday 11 August 2025 00:00

Thank you Team PWC for your continuous support and encouragement.

Why everybody codes in perl always

r/perl

Published by /u/ktown007 on Sunday 10 August 2025 22:39

I want to know whether the input contains any non-space as in \S. However, the input may contain ANSI VT escape sequences (Wikipedia) for text color and style, which do match \S (even the ESC code matches \S), and while they have an effect on the output (in the console, or terminal), they do not count as non-space for my purpose (which is to find out whether there is actual text other than whitespace).

I put together a good-enough RE for the ANSI VT escape sequences (\x1b\[[0-9;]{,9}m), but in the general case, and for Q&A purposes, it could be any “easier” dummy sequence such as A[a-z]+m that I would want to rule out as a match.

So how to proceed? What I found easiest is to first cleanse the input (note the /r modifier, which returns a cleansed copy and leaves the input unharmed):

$cleansed = $input =~ s/A[a-z]+m//gr;          # easy dummy

… or alternatively (for the real ANSI VT thing):

$cleansed = $input =~ s/\x1b\[[0-9;]{,9}m//gr; # ANSI VT

… and then match the cleansed copy against \S. And this works.

So this is a two-pass approach, and it is easily understood (not the least important aspect). But I'm wondering whether there is a smarter yet still clear one-pass approach (that'll inevitably take my RE insight to the next level)?
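One possible one-pass approach (a sketch, not necessarily the only answer) uses Perl's backtracking control verbs, available since 5.10: whenever the escape-sequence branch matches, (*SKIP)(*FAIL) discards it and forbids retrying inside it, so none of its characters can ever satisfy \S. The simplified "\e[...m" pattern from the post is assumed; a fuller ANSI pattern could be swapped in.

```perl
use strict;
use warnings;

# First branch eats an escape sequence and then deliberately fails,
# with (*SKIP) pinning the restart position past the sequence; only
# characters outside escape sequences can reach the \S branch.
my $re = qr/\x1b\[[0-9;]*m(*SKIP)(*FAIL)|\S/;

my $styled_blank = "\x1b[31m   \x1b[0m";   # colour codes, only spaces
my $styled_text  = "\x1b[31m x \x1b[0m";   # colour codes plus real text

print $styled_blank =~ $re ? "text\n" : "blank\n";   # prints "blank"
print $styled_text  =~ $re ? "text\n" : "blank\n";   # prints "text"
```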

Tiobe index for August 2025 puts Perl in the top 10 above PHP

r/perl

Published by /u/scottchiefbaker on Sunday 10 August 2025 20:11

Revert "Perl_op_convert_list - only short circuit CONST OPs with an IsCOW SV"

This reverts commit 80d3e79d3833d7ebe9723958157b6fd354612a94.

That commit was always intended as a stopgap to be replaced by
b1f270f0cbf67fc18c949626d91f0f1856747fe5.

The history behind this was as follows:
* GH#22116 - a902d92 - short-circuited constant folding on CONST OPs,
as this should be unnecessary. However, Dave Mitchell noticed that it
had the inadvertent effect of disabling COW on SVs holding UTF8 string
literals (e.g. `"\x{100}abcd"`).
* b1f270f0cbf67fc18c949626d91f0f1856747fe5 always seemed like the
best fix, but given the apparent proximity to the 5.42 release date
that commit seemed to be too big a change.
* GH#23296 brought in the now-reverted commit as a stop gap that
retained some of the older, now-unnecessary behaviour.

How does interpolating a Perl constant into a string work?

Perl questions on StackOverflow

Published by Eugene Yarmash on Sunday 10 August 2025 17:43

I've just come across such usage of Perl constants:

use strict;
use warnings;
use constant PI => 4 * atan2(1, 1);

print("The value of PI is: ${\PI}\n");  # The value of PI is: 3.14159265358979

How does this syntax for interpolating a constant into a string work?
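The short answer (sketched below, with hypothetical variable names): double-quoted strings interpolate the dereference syntax ${ BLOCK }, and \PI builds a reference to the value returned by the constant sub PI(), which the surrounding ${ } immediately dereferences.

```perl
use strict;
use warnings;
use constant PI => 4 * atan2(1, 1);

# ${ \PI } inside a double-quoted string: the block \PI yields a
# scalar reference to PI's return value, and ${ ... } dereferences it.
my $s = "The value of PI is: ${\PI}\n";

# The list flavour @{[ EXPR ]} is the more common idiom, since it
# accepts any expression and joins the result with $":
my $t = "Twice PI is: @{[ 2 * PI ]}\n";

print $s, $t;
```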

CVE-2025-40909

blogs.perl.org

Published by Mohammad Sajid Anwar on Sunday 10 August 2025 17:26


Reproduce the vulnerability CVE-2025-40909 in an isolated Docker container running Perl v5.34.0.

pp_pack NEXT_UNI_VAL: Replace utf8n_to_uvchr with utf8_to_uv_flags

This is part of the process of replacing all the _uvchr functions with
_uv functions, which are now preferred.  See perlapi.

Regex to match two consecutive dots but not three

Perl questions on StackOverflow

Published by ericgbirder on Sunday 10 August 2025 14:31

I'm looking for a Perl regex that matches two or more consecutive dots unless there are three dots.

These strings should match:

Yes.. Please go away.
I have the ball..
In this case....I vote yes.

These strings should not match:

You said this because...?
I dream...
Suppose...oh, wait.

I thought a negative lookahead assertion would work:

\.\.(?!\.)

but I still get three dots matching since it matches the last two dots of an ellipsis and the first character after the ellipsis.

Edit: To clarify, either of these two conditions should cause a match:

  1. Two consecutive dots surrounded by no dots.

  2. Four or more consecutive dots surrounded by no dots.

One or both ends of the "surrounded" dots could be start or end of line.

As I've stated it above, this seems to work:

[^\.]\.\.[^\.]|[^\.]\.{4,}[^\.]

But perhaps there's a better way?
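One alternative (a sketch, not necessarily better) keeps the lookaround idea but guards both sides of the dot run, which also handles dots at the start or end of the string without the character-class workaround:

```perl
use strict;
use warnings;

# Match a run of exactly two dots, or four or more dots, that is not
# touched by another dot on either side; a bare run of three dots
# can satisfy neither branch.
my $re = qr/(?<!\.)(?:\.{4,}|\.\.)(?!\.)/;

for my $s ('Yes.. Please go away.', 'I have the ball..',
           'In this case....I vote yes.') {
    print "match:    $s\n" if $s =~ $re;
}
for my $s ('You said this because...?', 'I dream...',
           'Suppose...oh, wait.') {
    print "no match: $s\n" unless $s =~ $re;
}
```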

Weekly Challenge: Duplicate Lines

dev.to #perl

Published by Simon Green on Sunday 10 August 2025 14:22

Weekly Challenge 333

Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. It's a great way for us all to practice some coding.

Challenge, My solutions

Task 1: Straight Line

Task

You are given a list of co-ordinates.

Write a script to find out if the given points make a straight line.

My solution

This is the exact opposite of the Boomerang challenge in week #293. As such, I copied and pasted my solution from that challenge, replacing the true and false values with false and true.

You can read about how I solved the Boomerang task here. For input from the command line, I take the integers and convert them to a points list (array in Perl) before calling the function.
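The collinearity test itself can be sketched like this (a hypothetical recreation, not the author's exact code): all points lie on one line iff the cross product of (p - p0) and (p1 - p0) is zero for every point p, which avoids division and any vertical-line special case.

```perl
use strict;
use warnings;

# @points holds array refs of [x, y] pairs.
sub is_straight_line {
    my @points = @_;
    my ($x0, $y0) = @{ $points[0] };
    my ($dx, $dy) = ($points[1][0] - $x0, $points[1][1] - $y0);
    for my $p (@points[2 .. $#points]) {
        # cross product of (p - p0) with (p1 - p0) must be zero
        return 0 if $dx * ($p->[1] - $y0) != $dy * ($p->[0] - $x0);
    }
    return 1;
}

print is_straight_line([2, 1], [2, 3], [2, 5]) ? "True\n" : "False\n";  # True
print is_straight_line([0, 0], [1, 1], [2, 3]) ? "True\n" : "False\n";  # False
```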

Examples

$ ./ch-1.py 2 1 2 3 2 5
True

$ ./ch-1.py 1 4 3 4 10 4
True

$ ./ch-1.py 0 0 1 1 2 3
False

$ ./ch-1.py 1 1 1 1 1 1
True

$ ./ch-1.py 1000000 1000000 2000000 2000000 3000000 3000000
True

Task 2: Duplicate Zeros

Task

You are given an array of integers.

Write a script to duplicate each occurrence of zero, shifting the remaining elements to the right. The elements beyond the length of the original array are not written.

My solution

For this task, I start with an empty solution list (array in Perl). I then iterate through the list of ints. If the value is 0, I append 0 to the solutions list. I also append the original value.

Finally, I return the solution list truncated to the number of items in the original list.

def duplicate_zeros(ints: list) -> list:
    solution = []
    for i in ints:
        if i == 0:
            solution.append(0)
        solution.append(i)

    return solution[:len(ints)]

The Perl code follows the same logic.

use v5.36;    # enables say and subroutine signatures

sub main (@ints) {
    my @solution = ();
    for my $i (@ints) {
        if ($i == 0) {
            push @solution, 0;
        }
        push @solution, $i;
    }
    say join(', ', @solution[ 0 .. $#ints ]);
}

Examples

$ ./ch-2.py 1 0 2 3 0 4 5 0
[1, 0, 0, 2, 3, 0, 0, 4]

$ ./ch-2.py 1 2 3
[1, 2, 3]

$ ./ch-2.py 1 2 3 0
[1, 2, 3, 0]

$ ./ch-2.py 0 0 1 2
[0, 0, 0, 0]

$ ./ch-2.py 1 2 0 3 4
[1, 2, 0, 0, 3]

GPT5 , Perl and PDL

r/perl

Published by /u/ReplacementSlight413 on Sunday 10 August 2025 01:03

The Weekly Challenge - Guest Contributions

The Weekly Challenge

Published on Sunday 10 August 2025 00:00

As you know, The Weekly Challenge primarily focuses on Perl and Raku. During Week #018, we received solutions to The Weekly Challenge - 018 from Orestis Zekai in Python. It was a pleasant surprise to receive solutions in something other than Perl and Raku. Ever since, regular team members have also contributed in other languages like Ada, APL, Awk, BASIC, Bash, Bc, Befunge-93, Bourne Shell, BQN, Brainfuck, C3, C, CESIL, Chef, COBOL, Coconut, C Shell, C++, Clojure, Crystal, D, Dart, Dc, Elixir, Elm, Emacs Lisp, Erlang, Excel VBA, F#, Factor, Fennel, Fish, Forth, Fortran, Gembase, Gleam, GNAT, Go, GP, Groovy, Haskell, Haxe, HTML, Hy, Idris, IO, J, Janet, Java, JavaScript, Julia, K, Kap, Korn Shell, Kotlin, Lisp, Logo, Lua, M4, Maxima, Miranda, Modula 3, MMIX, Mumps, Myrddin, Nelua, Nim, Nix, Node.js, Nuweb, Oberon, Octave, OCaml, Odin, Ook, Pascal, PHP, PicoLisp, Python, PostgreSQL, Postscript, PowerShell, Prolog, R, Racket, Rexx, Ring, Roc, Ruby, Rust, Scala, Scheme, Sed, Smalltalk, SQL, Standard ML, SVG, Swift, Tcl, TypeScript, Typst, Uiua, V, Visual BASIC, WebAssembly, Wolfram, XSLT, YaBasic and Zig.

utf8.c: Fix panic output function name

Perl commits on GitHub

Published by khwilliamson on Saturday 09 August 2025 20:37

utf8.c: Fix panic output function name

When this code was extracted from its earlier function into a new one,
the text of this panic message that names the failing function did not
get updated correspondingly.

GPT5 and Perl

r/perl

Published by /u/ReplacementSlight413 on Saturday 09 August 2025 20:21

GPT5 and Perl

Apparently GPT5 (and, I assume, all the models prior to it) was trained on datasets that overrepresent Perl. This, along with the terse nature of the language, may explain why the Perl output of the chatbots is usually good.

https://bsky.app/profile/pp0196.bsky.social/post/3lvwkn3fcfk2y


The DBI docs on prepare() state that its behaviour differs between drivers:

Drivers for engines without the concept of preparing a statement will typically just store the statement in the returned handle and process it when $sth->execute is called.

I have the general query log enabled in MySQL and I don't see any SQL statements other than the SELECTs or INSERTs that I pass to $sth->execute(). Does that mean that calling prepare() is a no-op for DBI's MySQL driver and I can just use the $dbh shortcuts (like $dbh->do() or $dbh->selectall_arrayref())? Or is there any other simple way to measure whether adding the prepare step provides any benefit?

P.S. I mostly care about the performance difference, as the simpler $dbh shortcuts provide the same functionality for parameterization etc. Also, I'm asking specifically about MySQL, not about an abstract DBI driver.
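One simple way to measure is to time both styles with the core Benchmark module. This is only a sketch: the DSN, credentials, and the items table are placeholders, and the script exits quietly when DBI/DBD::mysql or the server is unavailable.

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Placeholder connection details -- adjust for your environment.
my $dbh = eval {
    require DBI;
    DBI->connect('dbi:mysql:database=test', 'user', 'password',
                 { RaiseError => 1, PrintError => 0 });
};
unless ($dbh) {
    warn "No MySQL connection available, skipping benchmark\n";
    exit 0;
}

# Prepare once, execute many times ...
my $sth = $dbh->prepare('SELECT * FROM items WHERE id = ?');

# ... versus the one-shot shortcut, which prepares internally each call.
cmpthese(-2, {
    prepared => sub { $sth->execute(42); $sth->fetchall_arrayref },
    shortcut => sub {
        $dbh->selectall_arrayref('SELECT * FROM items WHERE id = ?',
                                 undef, 42);
    },
});
```

If the two rates come out roughly equal, the prepare step is indeed buying nothing for that driver and query.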

perldelta for new sv_vcatpvfn_flags malformed behavior

Perl commits on GitHub

Published by khwilliamson on Saturday 09 August 2025 18:02

perldelta for new sv_vcatpvfn_flags malformed behavior

(dlx) 12 great CPAN modules released last week

Niceperl

Published by prz on Saturday 09 August 2025 17:41

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::Music::ChordPro - A lyrics and chords formatting program
    • Version: v6.070.7 on 2025-08-05, with 389 votes
    • Previous CPAN version: 6.070 was 7 months, 12 days before
    • Author: JV
  2. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20250807.001 on 2025-08-07, with 23 votes
    • Author: BRIANDFOY
  3. DBD::mysql - A MySQL driver for the Perl5 Database Interface (DBI)
    • Version: 5.013 on 2025-08-03, with 64 votes
    • Author: DVEEDEN
  4. Google::Ads::GoogleAds::Client - Google Ads API Client Library for Perl
    • Version: v28.0.0 on 2025-08-06, with 20 votes
    • Previous CPAN version: v27.0.1 was 29 days before
    • Author: CHEVALIER
  5. Mac::PropertyList - work with Mac plists at a low level
    • Version: 1.605 on 2025-08-08, with 13 votes
    • Previous CPAN version: 1.604 was 15 days before
    • Author: BRIANDFOY
  6. Mail::DMARC - Perl implementation of DMARC
    • Version: 1.20250805 on 2025-08-05, with 35 votes
    • Previous CPAN version: 1.20250610 was 1 month, 25 days before
    • Author: MSIMERSON
  7. Module::CoreList - what modules shipped with versions of perl
    • Version: 5.20250803 on 2025-08-03, with 44 votes
    • Previous CPAN version: 5.20250720 was 13 days before
    • Author: BINGOS
  8. perl - The Perl 5 language interpreter
    • Version: 5.040003 on 2025-08-03, with 2150 votes
    • Author: SHAY
  9. Specio - Type constraints and coercions for Perl
    • Version: 0.52 on 2025-08-09, with 12 votes
    • Previous CPAN version: 0.51 was 1 month, 19 days before
    • Author: DROLSKY
  10. SPVM - The SPVM Language
    • Version: 0.990080 on 2025-08-06, with 36 votes
    • Author: KIMOTO
  11. Sys::Virt - libvirt Perl API
    • Version: v11.6.0 on 2025-08-04, with 17 votes
    • Previous CPAN version: v11.2.0 was 2 months before
    • Author: DANBERR
  12. Text::Balanced - Extract delimited text sequences from strings.
    • Version: 2.07 on 2025-08-03, with 16 votes
    • Previous CPAN version: 2.06 was 3 years, 1 month, 28 days before
    • Author: SHAY

(dcxi) metacpan weekly report - MCP

Niceperl

Published by prz on Saturday 09 August 2025 17:36

This is the weekly favourites list of CPAN distributions. Votes count: 51

Week's winner: MCP (+2)

Build date: 2025/08/09 15:36:12 GMT


Clicked for first time:


Increasing its reputation:

I started to develop SPVM::Mojolicious

dev.to #perl

Published by Yuki Kimoto on Saturday 09 August 2025 00:50

I started to develop SPVM::Mojolicious.

I released SPVM::Mojolicious on CPAN.

This is an early release. The project is incomplete.

We plan to make most Mojolicious features available.

As for SPVM features, we'll be able to create goroutines, threads, and executables.

This week in PSC (199) | 2025-08-07

blogs.perl.org

Published by Perl Steering Council on Friday 08 August 2025 15:36

Only Graham and Philippe attended. We coordinated with Aristotle via chat.

We only met to discuss the mailing-list moderation and immediate actions
(which resolved to sending an email to the moderators, and another one
to the list).

We also talked about moderation in general, and got some ideas to discuss
with the next PSC.

[P5P posting of this summary]

Caching in Perl

blogs.perl.org

Published by Mohammad Sajid Anwar on Thursday 07 August 2025 20:38


Caching with Redis/Valkey using Perl.
Please check out the link for more information:
https://theweeklychallenge.org/blog/caching-in-perl

SUSE Donates USD 11,500 to The Perl and Raku Foundation

perl.com

Published on Thursday 07 August 2025 09:55

The Perl and Raku Foundation (TPRF) is thrilled to announce a substantial $11,500 donation from SUSE, one of the world’s leading enterprise Linux and cloud-native and AI solutions providers. This generous contribution bolsters the Perl 5 Core Maintenance Fund and demonstrates SUSE’s commitment to the open-source ecosystem.

This donation from SUSE is actually made up of two parts. $10,000 is being donated by SUSE LLC and an additional $1,500 is being provided by The SUSE Open Source Network, to support the development and sustainability of Perl. This aligns with the network’s mission to empower and support open source communities.

Perl is a Fundamental Component of the SUSE Ecosystem

“At SUSE, Perl is a fundamental component and member of our ecosystem,” explains Miguel Pérez Colino, Director of Operations, Linux Product Management & Marketing. “We provide it as part of our Linux offerings by actively supporting Perl packages in SUSE Linux Enterprise and openSUSE. We use it extensively in our toolset, powering among others OpenQA and Open Build Service, this last one is used to build not just Linux packages but also Kubernetes.”

The Perl-Powered OpenQA and Open Build Service

SUSE’s OpenQA project is an automated testing framework that ensures quality across countless hardware configurations and software combinations. At its heart is Perl, orchestrating complex test scenarios with the reliability that system administrators have come to expect.

Similarly, Open Build Service, which runs on many services written in Perl, represents the modern evolution of package management, creating not just traditional Linux packages but also container images and Kubernetes distributions.

Sustaining the Digital Commons

SUSE’s donation is a demonstration of digital stewardship—the recognition that the tools we rely upon require active investment to remain secure, efficient, and relevant.

“We are proudly donating to The Perl and Raku Foundation (TPRF) to ensure Perl’s continued development and health, which is vital to the open-source world, we are part of, and we champion,” Colino continues.

This investment addresses some critical aspects of language maintenance:

Security Vigilance: In an era of increasing cyber threats, timely security patches aren’t optional—they’re essential. TPRF’s maintenance fund ensures that vulnerabilities can be addressed promptly, protecting countless systems worldwide.

Performance Evolution: Modern computing demands continue to evolve. The fund supports ongoing optimisation efforts that keep Perl competitive in today’s performance-conscious environment.

Platform Diversity: As computing platforms proliferate—from traditional servers to edge devices to cloud containers—Perl must remain compatible and efficient across this expanding landscape.

Community Responsiveness: Bug reports and feature requests from the global Perl community require careful evaluation and implementation. This fund ensures these contributions don’t languish unaddressed.

A Partnership Model for Open Source Sustainability

SUSE’s contribution represents more than financial support—it’s a blueprint for sustainable open-source stewardship. When organisations that build upon open-source foundations reinvest in those foundations, they create a virtuous cycle that benefits everyone. It’s a recognition that the digital commons we all depend upon flourish only through collective stewardship.

CVE-2025-40909

The Weekly Challenge

Published on Thursday 07 August 2025 00:00

In this post, we demonstrate how to reproduce CVE-2025-40909, a vulnerability in Perl related to working directories and thread behaviour.

Serialisation in Perl

blogs.perl.org

Published by Mohammad Sajid Anwar on Wednesday 06 August 2025 16:44


Comparative analysis of Storable and Sereal using Perl.
Please check out the link for more information:
https://theweeklychallenge.org/blog/serialisation-in-perl

PEVANS Core Perl 5: Grant Report for June/July 2025

Perl Foundation News

Published by alh on Wednesday 06 August 2025 10:07


Paul writes:

I didn't get any P5P work done in June, instead working on some other projects while awaiting the 5.42 release.

In July I've managed to continue some work on sub signatures improvements

  • 7 = Beginnings of named parameter handling in subroutine signatures
    • https://github.com/Perl/perl5/pull/23527
    • https://github.com/leonerd/perl5/tree/faster-signatures (work in progress branch)
  • 1 = Scalar-List-Utils resync with CPAN
    • https://github.com/Perl/perl5/pull/23500

Total: 8 hours

AWS Lambda

blogs.perl.org

Published by Mohammad Sajid Anwar on Wednesday 06 August 2025 04:17


Quick introduction to AWS Lambda using CLI, Python and Perl.
Please check out the link for more information:
https://theweeklychallenge.org/blog/aws-lambda

A Rusty Web? An Excursion of a Perl Guy into Rust Land

End Point Dev blog Perl topic

Published by Marco Pessotto on Tuesday 05 August 2025 00:00

Several rusty chains are tied into the side of a rusty metal structure.

In my programming career centered around web applications I’ve always used dynamic, interpreted languages: Perl, JavaScript, Python, and Ruby. However, I’ve always been curious about compiled, strongly typed languages and if they can be useful to me and to my clients. Based on my recent findings, Rust would be my first choice. It’s a modern language, has excellent documentation and it’s quite popular. However, it’s very different from the languages I know.

I read most of the Rust book a couple of years ago, but given that I didn’t do anything with it, my knowledge quickly evaporated. This time I read the book and immediately after that I started to work on a non-trivial project involving downloading XML data from different sources, database operations, indexing and searching documents, and finally serving JSON over HTTP. My goal was to replace at least part of a Django application which seemed to have performance problems. The Django application uses Xapian (which is written in C++) via its bindings to provide the core functionality. Indexing documents would be delegated to a Celery task queue.

Unfortunately Xapian does not have bindings for Rust so far.

My reasoning was: I could use the PostgreSQL full text search feature instead of Xapian, simplifying the setup (updating a row would trigger an index update, instead of delegating the operation to Celery).

After reading the Rust book I truly liked the language. Its main feature is that it (normally) gives you no room for nasty memory management bugs which plague languages like C. Being compiled to machine code, it’s faster than interpreted languages by an order of magnitude. However, having to state the type of variables, arguments, and return values was at first kind of a culture shock, but I got used to it.

When writing Perl, I’m used to constructs like these:

if (my $res = download_url($url)) {
    ...
}

which are no longer possible. Instead you have to use the match construct and extract values from the Option (Some/None) and Result (Ok/Err) enumerations. This is the standard way to handle errors and values which may or may not be present. There is nothing like undef, and this is one of Rust's main features. Instead, you need to cover all the cases with something like this:

match download_url(url.clone()) {
    Ok(res) => {
       ...
    },
    Err(e) => println!("Error {url}: {e}"),
}

Which can also be written as:

if let Ok(res) = download_url(url.clone()) {
    ...
}

You must be consistent with the values you are declaring and returning, and take care of the mutability and the borrowing of the values. In Rust you can't have a piece of memory that is modifiable from multiple places: for example, once you move a string or a data structure into a function, you can't use it any more (unless you pass a reference instead). This is without a doubt a good thing. When in Perl you pass a reference to a hash into a function, you don't know what happens to it. Things can be modified as a side effect, and you only realize later, at debugging time, why that piece of data is not what you expect.
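The side-effect hazard just described is easy to demonstrate in any dynamic language. Here is a minimal Python sketch (the add_discount helper and field names are hypothetical) of a function quietly mutating data the caller still holds:

```python
def add_discount(order):
    # Looks read-only from the call site, but mutates the caller's dict.
    order["total"] -= 10
    return order["total"]

order = {"total": 100.0}
discounted = add_discount(order)

# The caller's data changed as a side effect, the kind of surprise
# Rust's ownership and borrowing rules are designed to rule out.
print(order["total"])  # 90.0
```

Rust would force this function to either take ownership of the data or borrow it mutably, so the mutation would be visible in the function signature.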

In Rust land, everything feels under control, and the compiler throws errors at you which most of the time make sense. It explains to you why you can’t use that variable at that point, and even suggests a fix. It’s amazing the amount of work behind the language and its ability to analyze the code.

The string management feels a bit weird because it’s normally anchored to the UTF-8 encoding, while e.g. Perl has an abstract way to handle it, so I’m used to thinking differently about it.

The async feature is nice, but present in most of the modern languages (Perl included!), so I don’t think that should be considered the main reason to use Rust.

Bottom line: I like the language. It’s very different to what I was used to, but I can see many advantages. The downside is that you can’t write all those “quick and dirty” scripts which are the daily bread of the sysadmin. It lacks that practical, informal approach I’m used to.

Once I got acquainted with the language, I went shopping for “crates” (which is what modules are called in Rust) here: https://www.arewewebyet.org/.

Lately I have a bit of a dislike for object–relational mappings (ORM), so I didn’t go with diesel nor sqlx, but I went straight for tokio_postgres.

This saved me quite a bit of documentation reading and gave me direct access to the database. Nothing weird to report here. It feels like using any other DB driver in any other language, with a statement, the placeholders and the arguments. The difference, of course, is that you need to care about the data types which are coming out of the DB (again the Option Enum is your friend and the error messages are helpful).
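The statement/placeholder pattern described above is common to most database drivers. As a rough illustration in Python's sqlite3 (not tokio_postgres; the table and rows are made up), note how a NULL column arrives as None and must be handled explicitly, much like Rust's Option:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entry (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO entry (id, title) VALUES (?, ?)", (1, "FIT parsing"))
conn.execute("INSERT INTO entry (id, title) VALUES (?, ?)", (2, None))

rows = conn.execute("SELECT id, title FROM entry ORDER BY id").fetchall()

# A nullable column comes back as None; the caller must cover that case,
# just as Rust forces you to match on an Option.
titles = [title if title is not None else "(untitled)" for _, title in rows]
print(titles)  # ['FIT parsing', '(untitled)']
```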

To get data from the Internet, reqwest did the trick just fine without any surprise.

For XML deserialization, serde was paired with quick-xml. This is one of the interesting bits.

You start defining your data structures like this:

use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct OaiPmhResponse {
    #[serde(rename = "responseDate")]
    response_date: String,
    request: String,
    error: Option<ResponseError>,
    #[serde(rename = "ListRecords")]
    list_records: Option<ListRecords>,
}
// more definitions follow, to match the structure we expect

Then you feed the XML string to the from_str function like this:

use quick_xml::de::from_str;

fn parse_response (xml: &str) -> OaiPmhResponse {
    match from_str(xml) {
        Ok(res) => res,
        // return a dummy one with no records in it in case of errors
        Err(e) => OaiPmhResponse {
            response_date: String::from("NOW"),
            request: String::from("Invalid"),
            error: Some(ResponseError {
                code: String::from("Invalid XML"),
                message: e.to_string(),
            }),
            list_records: None,
        },
    }
}

which takes care of the parsing and gives you back either an Ok with the data structure you defined inside and the tags properly mapped, or an error. The structs can have methods attached so they provide a nice OOP-like encapsulation.

Once the data collection was successful, I moved to the web application itself.

I chose the Axum framework, maintained by the Tokio project and glued all the pieces together.

The core of the application is something like this:

#[derive(Serialize, Debug)]
struct Entry {
    entry_id: i32,
    rank: f32,
    title: String,
}

async fn search(
    State(pool): State<ConnectionPool>,
    Query(params): Query<HashMap<String, String>>,
) -> (StatusCode, Json<Vec::<Entry>>) {
    let conn = pool.get().await.expect("Failed to get a connection from the pool");
    let sql = r#"
SELECT entry_id, title, ts_rank_cd(search_vector, query) AS rank
FROM entry, websearch_to_tsquery($1) query
WHERE search_vector @@ query
ORDER BY rank DESC
LIMIT 10;
"#;
    let query = match params.get("query") {
        Some(value) => value,
        None => "",
    };
    let out = conn.query(sql, &[&query]).await.expect("Query should be valid")
        .iter().map(|row|
                    Entry {
                        entry_id: row.get(0),
                        title: row.get(1),
                        rank: row.get(2),
                    }).collect();
    tracing::debug!("{:?}", &out);
    (StatusCode::OK, Json(out))
}

Which simply runs the query using the input provided by the user, runs the full text search, and returns the serialized data as JSON.

During development it felt fast. The disappointment came when I populated the database with about 30,000 documents of various sizes. The Django application, despite returning more data and the facets, was still way faster. With the two applications running on the same (slow) machine I got a response in 925 milliseconds from the Rust application, and in 123 milliseconds from the Django one!

Now, most of the time is spent in the SQL query, so the race here is not Python vs. Rust, but Xapian vs. PostgreSQL’s full text search, with Xapian (Python just provides an interface to the fast C++ code) winning by a large margin. Even if the Axum application is as fast as it can get, because it’s stripped to the bare minimum (it has no sessions, no authorization, no templates), the time saved is not enough to compensate for the lack of a dedicated and optimized full text search engine like Xapian. Of course I shouldn’t be too surprised.

To actually compete with Django + Xapian, I should probably use Tantivy, instead of relying on the PostgreSQL full text search. But that would be another adventure…

The initial plan turned out to be a failure, but this was really a nice and constructive excursion, as I could learn a new language, using its libraries to do common and useful tasks like downloading data, building small web applications, and interfacing with the database. Rust appears to have plenty of quality crates.

Besides the fact that this was just an excuse to study a new language, it remains true that rewriting existing, working applications is extremely unrewarding and most likely ineffective. Reaching parity with the current features requires a lot of time (and budget), and at the end of the story the gain could be minimal and better achieved with optimization (here I think about all our clients running Interchange).

However, if there is a need for a microservice doing a small task where speed is critical and where the application overhead should be minimal, Rust would be a viable option.

Important: Must be authorized to work in Spain.

Job description

As a Data Engineer, you will be responsible for designing, developing, and maintaining MultiSafepay's data infrastructure. Your primary focus will be ensuring that high-quality, scalable, and well-structured data is available to various teams within the organization. You will play a crucial role in optimizing data processes, managing databases, and setting up efficient data pipelines for seamless data accessibility.
What will you be doing?

• Develop and maintain a platform built in Perl
• Set up, maintain, and optimize databases, data warehouses, and ElasticSearch indexes
• Design and implement scalable data pipelines for collecting, transforming, and storing data
• Ensure data integrity, security, and compliance with industry standards
• Automate data extraction, transformation, and loading (ETL) processes
• Work closely with data analysts, ensuring they have well-structured and accurate data for reporting
• Monitor and optimize database performance, resolving bottlenecks and inefficiencies

Perl 🐪 Weekly #732 - MetaCPAN Success Story

dev.to #perl

Published by Gabor Szabo on Monday 04 August 2025 05:23

Originally published at Perl Weekly 732

Hi there,

MetaCPAN's recent battle against mounting traffic abuse stands as a powerful testament to the resilience and ingenuity of open‑source infrastructure teams. After enduring recurring 503 outages that jeopardized service for Perl hackers worldwide, the MetaCPAN team embarked on a disciplined, data‑driven counterattack. What began with rudimentary logs, robots.txt tweaks and manual IP bans evolved into a robust partnership with Datadog and Fastly, enabling real‑time visibility and proactive defense. With the deployment of sophisticated rate‑limiting rules, user‑agent filtering, next‑generation WAF protections and a dynamic challenge system, MetaCPAN has successfully blocked some 80 percent of malicious traffic—including AI scrapers—while delivering a steady, reliable experience to legitimate users. This journey highlights how transparency, layered defense and smart automation can transform a crisis into an opportunity for stronger, more sustainable service.

Mark Gardner’s return to technical blogging marks a welcome revival of one of Perl’s clearest and most thoughtful voices.

Robert Acock created a mobile app, Heaven Vs Hell, written using React Native with backend APIs in Mojolicious. You can find it in Google Play and the App Store.

Enjoy the rest of the newsletter.

--
Your editor: Mohammad Sajid Anwar.

Announcements

Sydney August Meeting!

For all Perl Mongers in and around Sydney, please do join the next meetup.

Science Perl Journal DOIs are now live! Update on videos and next Issue of the SPJ

For all Science Perl Journal fans, please find the list of permanent DOIs.

Articles

MetaCPAN's Traffic Crisis: An Eventual Success Story

MetaCPAN.org, the essential search engine for Perl’s CPAN repository, has faced months of severe traffic issues that brought the service to its knees with frequent 503 errors.

Heaven Vs Hell

The mobile app is written using React Native, with backend APIs in Mojolicious.

Lightweight object-oriented Perl scripts: From modulinos to moodulinos

In Moodulinos, Mark Gardner offers a concise yet instructive journey through modern, lightweight Perl scripting by combining the time-tested modulino pattern with the expressive power of Moo.

Re: Wired on Perl and the virtue of humility

In his thoughtful response to Samuel Arbesman’s Wired piece, Mark Gardner reframes the conversation around Perl.

Discussion

Is it still worth adding installation instructions to a distribution?

This post is a thoughtful prompt for Perl developers maintaining CPAN modules.

The Weekly Challenge

The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.

The Weekly Challenge - 333

Welcome to a new week with a couple of fun tasks "Straight Line" and "Duplicate Zeros". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.

RECAP - The Weekly Challenge - 332

Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Binary Date" and "Odd Letters" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.

TWC332

Both solutions are compact and idiomatic Perl, ideal for scripting and competitive programming.

An Odd Date

A technically sound and idiomatic Raku solution with solid input handling, effective use of Raku’s expressive syntax, and clean logic.

Odd last date letters, binary word list buddy

The solutions are terse, elegant, and showcase modern Perl idioms. They shine in clarity for those familiar with Perl 5.42+, especially with sprintf and all.

Perl Weekly Challenge: Week 332

The Raku version shows off the expressive power of high-level language features (like Bag and junctions) in a tight one-liner. The Perl version is longer but more transparent to a general audience, especially Perl learners.

Binary Regularities

A technically impressive post. Task 1 is robust and production-ready. Task 2 is a brilliant regex stunt — best appreciated as a learning artifact.

quick and easy

A well-executed and educationally valuable post. It demonstrates strong language fluency and a commitment to practical polyglot coding. Both Raku and SQL solutions are standout examples of expressive minimalism, while PL/Java and Python offer accessible, mainstream approaches.

Perl Weekly Challenge 332

It is well-written, robust and idiomatic Perl. Task 1 stands out for its thorough validation and error handling. Task 2 is concise and logically correct.

Binary + Odd = XOR

The post is a well-structured, technically sound and Perl-fluent exploration of the weekly challenge. It not only solves both tasks concisely but also offers insight into language features, performance trade-offs and idiomatic Perl practices.

Oddly Binary

Accurate and efficient solutions in Perl, Raku, Python, and Elixir. Demonstrates strong understanding of each language’s syntax and standard libraries. Clear separation of concerns and well-structured code snippets.

Base 2 dates and odd words

A strong, idiomatic Perl solution to both problems—optimized, correct and pleasantly readable. This write-up reflects deep Perl familiarity and attention to corner cases.

The Weekly Challenge #332

These are technically solid, idiomatic and well-documented. It balances clarity, efficiency and modern Perl features effectively.

Odd Date

The post delivers a compact and well-structured solution set, with a focus on language expressiveness, functional style and algorithmic clarity. It's especially valuable for readers interested in cross-language comparisons rather than Perl-only perspectives.

I sent my date a letter

It delivers solid, minimal and idiomatic solutions in both Python and Perl. The implementations are exactly in line with typical weekly challenge style: clean, correct and easily accessible to other coders.

Hypertime

It is engaging, technically sound and reflects a solid grasp of Raku’s expressive features, especially hyper operators and Bags.

Rakudo

2025.30 A Hexagonal Week

Weekly collections

NICEPERL's lists

Great CPAN modules released last week.

Events

Paris.pm monthly meeting

August 13, 2025

Paris.pm monthly meeting

September 10, 2025

You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.

Want to see more? See the archives of all the issues.

Not yet subscribed to the newsletter? Join us free of charge!

(C) Copyright Gabor Szabo
The articles are copyright the respective authors.

Weekly Challenge: I sent my date a letter

dev.to #perl

Published by Simon Green on Sunday 03 August 2025 07:53

Weekly Challenge 332

Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. It's a great way for us all to practice some coding.

Challenge, My solutions

Task 1: Binary Date

Task

You are given a date in the format YYYY-MM-DD.

Write a script to convert it into a binary date.

My solution

For this task, I do these steps.

  1. Split the date from input_string on hyphens into the date_parts list (array in Perl).
  2. Convert each part to its binary representation. These are stored as binary_parts.
  3. Join the binary_parts list into a single string separated by hyphens.

Python

def binary_date(input_string: str) -> str:
    date_parts = input_string.split('-')
    # bin() prefixes its result with '0b', so slice off the first two characters
    binary_parts = [bin(int(part))[2:] for part in date_parts]
    return '-'.join(binary_parts)

Perl

sub main ($input_string) {
    my @date_parts = split /-/, $input_string;
    my @binary_parts = map { sprintf( "%b", $_ ) } @date_parts;
    say join( '-', @binary_parts );
}

Examples

$ ./ch-1.py 2025-07-26
11111101001-111-11010

$ ./ch-1.py 2000-02-02
11111010000-10-10

$ ./ch-1.py 2024-12-31
11111101000-1100-11111

Task 2: Odd Letters

Task

You are given a string.

Write a script to find out if each letter in the given string appeared an odd number of times.

My solution

Python has the Counter class (from the collections module) that takes an iterable (like a string) and creates a dictionary with the frequency of each letter. I then use the all function to check whether every letter occurs an odd number of times.

from collections import Counter

def odd_letters(input_string: str) -> bool:
    freq = Counter(input_string)
    return all(count % 2 == 1 for count in freq.values())

Perl doesn't have a built-in equivalent of Counter, so I build the frequency hash by hand and use all from List::Util for the final check.

use List::Util qw(all);

sub main ($input_string) {
    my %freq;
    for my $char ( split //, $input_string ) {
        $freq{$char}++;
    }

    my $all_odd = all { $_ % 2 == 1 } values %freq;
    say $all_odd ? "true" : "false";
}

Examples

$ ./ch-2.py weekly
False

$ ./ch-2.py perl
True

$ ./ch-2.py challenge
False

(dlix) 8 great CPAN modules released last week

Niceperl

Published by prz on Saturday 02 August 2025 22:15

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::rdapper - a simple console-based RDAP client.
    • Version: 1.18 on 2025-07-29, with 20 votes
    • Previous CPAN version: 1.17 was 7 days before
    • Author: GBROWN
  2. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20250731.002 on 2025-07-31, with 23 votes
    • Previous CPAN version: 20250730.001 was 1 day before
    • Author: BRIANDFOY
  3. Crypt::CBC - Encrypt Data with Cipher Block Chaining Mode
    • Version: 3.07 on 2025-07-27, with 12 votes
    • Previous CPAN version: 3.06 was before
    • Author: TIMLEGGE
  4. JSON::Schema::Modern - Validate data against a schema using a JSON Schema
    • Version: 0.616 on 2025-07-26, with 12 votes
    • Previous CPAN version: 0.615 was 14 days before
    • Author: ETHER
  5. Net::DNS - Perl Interface to the Domain Name System
    • Version: 1.52 on 2025-07-29, with 28 votes
    • Previous CPAN version: 1.51_04 was 1 day before
    • Author: NLNETLABS
  6. PAR - Perl Archive Toolkit
    • Version: 1.021 on 2025-07-31, with 19 votes
    • Previous CPAN version: 1.020 was 1 year, 4 months, 27 days before
    • Author: RSCHUPP
  7. Proc::ProcessTable - Perl extension to access the unix process table
    • Version: 0.637 on 2025-07-28, with 21 votes
    • Previous CPAN version: 0.636 was 2 years, 1 month, 7 days before
    • Author: JWB
  8. Scalar::List::Utils - Common Scalar and List utility subroutines
    • Version: 1.70 on 2025-07-30, with 181 votes
    • Previous CPAN version: 1.69 was 3 months, 28 days before
    • Author: PEVANS

Raku 6.e ~ Will Coleda ~ TPRC 2025 ~ Lightning Talk

The Perl and Raku Conference YouTube channel

Published by The Perl and Raku Conference - Greenville, SC 2025 on Saturday 02 August 2025 15:14

Perl 5.42: New Features ~ Karl Williamson ~ TPRC 2025

The Perl and Raku Conference YouTube channel

Published by The Perl and Raku Conference - Greenville, SC 2025 on Saturday 02 August 2025 15:00

6 Programming Language Updates You Shouldn’t Ignore (July 2025 Edition)

Perl on Medium

Published by The CS Engineer on Saturday 02 August 2025 11:11

From Perl 5.42 to OpenSilver 3.2, July saw meaningful updates for developers. Here are 6 that are actually worth knowing.

CPANTesters Update ~ D Ruth Holloway

The Perl and Raku Conference YouTube channel

Published by The Perl and Raku Conference - Greenville, SC 2025 on Friday 01 August 2025 14:37

Raku Next Steps: Hypersonic ~ Bruce Gray ~ TPRC 2025

The Perl and Raku Conference YouTube channel

Published by The Perl and Raku Conference - Greenville, SC 2025 on Friday 01 August 2025 14:25

What is Unicode? ~ Karl Williamson ~ TPRC 2025 ~

The Perl and Raku Conference YouTube channel

Published by The Perl and Raku Conference - Greenville, SC 2025 on Friday 01 August 2025 14:22

MetaCPAN's Traffic Crisis: An Eventual Success Story

perl.com

Published on Monday 28 July 2025 22:01

"Amelia's Sad Face" by donnierayjones is licensed under CC BY 2.0.

MetaCPAN.org, the essential search engine for Perl’s CPAN repository, has faced months of severe traffic issues that brought the service to its knees with frequent 503 errors. Here’s how the team fought back against an army of misbehaving bots and hostile traffic.

The Problem Emerges

MetaCPAN began experiencing multiple 503 service errors daily, disrupting access for legitimate Perl developers worldwide. Traditional monitoring failed to identify the root cause of traffic spikes overwhelming the infrastructure.

Initial Investigation Phase

The team implemented basic monitoring and took preliminary defensive measures:

  • Deployed uWSGI stats monitoring tools to track application performance
  • Updated robots.txt to explicitly list bots and specify crawling restrictions
  • Began manual IP blocking of obvious bad actors
  • Attempted to deploy Anubis rate limiting (ultimately failed and was rolled back)

The Datadog Breakthrough

Datadog logo Fastly logo

Partnership with Datadog transformed visibility into the problem:

  • Established comprehensive logging pipeline sending Fastly CDN logs for both web and API services to Datadog
  • Deployed Kubernetes Datadog agent to cluster
  • Created public dashboard showing real-time traffic metrics
  • Built private dashboard specifically to identify problematic IPs and user agents

Result: Finally able to see the enemy—specific IP ranges (particularly from Alibaba.com) and user agents generating massive request volumes. However, manual blocking proved unsustainable, requiring constant vigilance and rapid response.

Escalating Defences

The team implemented more sophisticated blocking:

  • Deployed VCL snippets in Fastly to block based on user agents (later replaced with Next Gen WAF)
  • Blocked extensive IP ranges using Fastly’s IP Block list feature
  • Implemented additional request rate limiting
  • Partnered with Fastly for free enterprise services including DDoS protection

Limitation: Manual processes couldn’t keep pace with evolving attack patterns.

Next-Generation WAF Implementation

Deployment of Fastly’s Web Application and API Protection:

  • Enabled next-gen WAF to automatically identify and block suspect bots
  • Implemented categorical blocking for known bad traffic types
  • Reduced manual intervention requirements significantly

Progress: Noticeable improvement, but sophisticated attacks still overwhelmed the service during peak periods.

The Dynamic Challenge Solution

Final defensive layer was activated:

  • Deployed Fastly’s Dynamic Challenge WAF feature
  • Intelligent challenge system filtered automated bots whilst allowing legitimate users through
  • Dramatic reduction in successful attacks reaching MetaCPAN infrastructure

Current State: Victory Through Data

Bad bots traffic visualization

Traffic challenges chart

Today’s public Datadog dashboard tells the success story in real-time metrics:

In the last week the number of requests handled broke down as follows:

  • 5,190,000 bad bot requests (this includes AI scrapers) blocked
  • 3,290,000 challenges issued
  • 579,000 requests rate limited
  • 1,720,000 legitimate requests served (much of this from Fastly’s CDN cache), with the remainder reaching the origin servers and being successfully served to end users.

So about 80% of all traffic is now blocked.
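As a rough sanity check on the headline figure, summing the blocked, challenged and rate-limited categories from the numbers above gives a filtered share in the mid 80s (bearing in mind that some challenged users will pass the challenge and be served), broadly consistent with the quoted "about 80%":

```python
blocked = 5_190_000      # bad bot requests blocked
challenged = 3_290_000   # challenges issued
rate_limited = 579_000   # requests rate limited
legitimate = 1_720_000   # legitimate requests served

filtered = blocked + challenged + rate_limited
total = filtered + legitimate
share = filtered / total
print(f"{share:.0%}")  # 84%
```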

The numbers demonstrate the scale of the threat MetaCPAN faced and the effectiveness of the layered defence strategy.

We have RSS feeds and a dedicated API which can be easily accessed through MetaCPAN::Client for anyone who wants to get data from us without scraping the site. We do ask that people register their user agent.

Community Heroes

This infrastructure battle was won through generous community support:

A breakdown of the steps can be found in the ticket https://github.com/metacpan/metacpan-k8s/issues/154

Fastly and Datadog deserve particular recognition for donating enterprise-grade services. Without these contributions, MetaCPAN couldn’t operate at the required scale and reliability.

Additional sponsors listed at https://metacpan.org/about/sponsors continue supporting this vital community resource, though operational costs remain significant.

How to help: The Perl community can support MetaCPAN’s ongoing operations through https://opencollective.com/metacpan-core, ensuring this essential service remains available for all developers.

Analysing FIT data with Perl: interactive data analysis

perl.com

Published on Sunday 27 July 2025 11:46

Printing statistics to the terminal or plotting data extracted from FIT files is all well and good. One problem is that the feedback loops are long. Sometimes questions are better answered by playing with the data directly. Enter the Perl Data Language.

Interactive data analysis

For more fine-grained analysis of our FIT file data, it’d be great to be able to investigate it interactively. Other languages such as Ruby, Raku and Python have a built-in REPL. Yet Perl doesn’t. But help is at hand! PDL (the Perl Data Language) is designed to be used interactively and thus has a REPL we can use to manipulate and investigate our activity data.

Getting set up

Before we can use PDL, we’ll have to install it:

$ cpanm PDL

After it has finished installing (this can take a while), you’ll be able to start the perlDL shell with the pdl command:

perlDL shell v1.357
 PDL comes with ABSOLUTELY NO WARRANTY. For details, see the file
 'COPYING' in the PDL distribution. This is free software and you
 are welcome to redistribute it under certain conditions, see
 the same file for details.
ReadLines, NiceSlice, MultiLines  enabled
Reading PDL/default.perldlrc...
Found docs database /home/vagrant/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/x86_64-linux/PDL/pdldoc.db
Type 'help' for online help
Type 'demo' for online demos
Loaded PDL v2.100 (supports bad values)

Note: AutoLoader not enabled ('use PDL::AutoLoader' recommended)

pdl>

To exit the pdl shell, enter Ctrl-D at the prompt and you’ll be returned to your terminal.

Cleaning up to continue

To manipulate the data in the pdl shell, we want to be able to call individual routines from the geo-fit-plot-data.pl script. This way we can use the arrays that some of the routines return to initialise PDL data objects.

It’s easier to manipulate the data if we get ourselves a bit more organised first. In other words, we need to extract the routines into a module, which will make calling the code we created earlier from within pdl much easier.

Before we create a module, we need to do some refactoring. One thing that’s been bothering me is the way the plot_activity_data() subroutine also parses and manipulates date/time data. This routine should be focused on plotting data, not on massaging its requirements into the correct form. Munging the date/time data is something that should happen in its own routine. This way we encapsulate the concepts and abstract away the details. Another way of saying this is that the plotting routine shouldn’t “know” how to manipulate date/time information to do its job.

To this end, I’ve moved the time extraction code into a routine called get_time_data():

sub get_time_data {
    my @activity_data = @_;

    # get the epoch time for the first point in the time data
    my @timestamps = map { $_->{'timestamp'} } @activity_data;
    my $first_epoch_time = $date_parser->parse_datetime($timestamps[0])->epoch;

    # convert timestamp data to elapsed minutes from start of activity
    my @times = map {
        my $dt = $date_parser->parse_datetime($_);
        my $epoch_time = $dt->epoch;
        my $elapsed_time = ($epoch_time - $first_epoch_time)/60;
        $elapsed_time;
    } @timestamps;

    return @times;
}

The main change here in comparison to the previous version of the code is that we pass the activity data as an argument to get_time_data(), returning the @times array to the calling code.
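For comparison, the same timestamp-to-elapsed-minutes transformation can be sketched in Python (the sample timestamps below are made up, but follow the %Y-%m-%dT%H:%M:%SZ pattern used throughout):

```python
from datetime import datetime, timezone

def get_time_data(activity_data):
    """Convert ISO-8601 Zulu timestamps to minutes elapsed since the first point."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    epochs = [
        datetime.strptime(point["timestamp"], fmt)
        .replace(tzinfo=timezone.utc)
        .timestamp()
        for point in activity_data
    ]
    first = epochs[0]
    return [(epoch - first) / 60 for epoch in epochs]

points = [
    {"timestamp": "2025-05-01T10:00:00Z"},
    {"timestamp": "2025-05-01T10:00:30Z"},
    {"timestamp": "2025-05-01T10:02:00Z"},
]
print(get_time_data(points))  # [0.0, 0.5, 2.0]
```

The first point maps to 0.0 and every later point to its offset in minutes, just as in the Perl routine.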

The code creating the date string used in the plot title now also resides in its own function:

sub get_date {
    my @activity_data = @_;

    # determine date from timestamp data
    my @timestamps = map { $_->{'timestamp'} } @activity_data;
    my $dt = $date_parser->parse_datetime($timestamps[0]);
    my $date = $dt->strftime("%Y-%m-%d");

    return $date;
}

Where again, we’re passing the @activity_data array to the function. It then returns the $date string that we use in the plot title.

Both of these routines use the $date_parser object, which I’ve extracted into a constant in the main script scope:

my $date_parser = DateTime::Format::Strptime->new(
    pattern => "%Y-%m-%dT%H:%M:%SZ",
    time_zone => 'UTC',
);

Making a mini-module

It’s time to make our module. I’m not going to create the full Perl module infrastructure here, as it’s not necessary for our current goal. I want to import a module called Geo::FIT::Utils and then access the functions that it provides.5 Thus, in an appropriate project directory, we need to create a file called lib/Geo/FIT/Utils.pm as well as its associated path:

$ mkdir -p lib/Geo/FIT
$ touch lib/Geo/FIT/Utils.pm

Opening the file in an editor and entering this stub module code:

 1  package Geo::FIT::Utils;
 2
 3  use Exporter 5.57 'import';
 4
 5
 6  our @EXPORT_OK = qw(
 7      extract_activity_data
 8      show_activity_statistics
 9      plot_activity_data
10      get_time_data
11      num_parts
12  );
13
14  1;

we now have the scaffolding of a module that (at least, theoretically) exports the functionality we need.

Line 1 specifies the name of the module. Note that the module’s name must match its path on the filesystem, hence why we created the file Geo/FIT/Utils.pm.

We import the Exporter module (line 3) so that we can specify the functions to export. This is the @EXPORT_OK array’s purpose (lines 6-12).

Finally, we end the file on line 14 with the code 1;. This line is necessary so that importing the package (which in this case is also a module) returns a true value. The value 1 is synonymous with Boolean true in Perl, hence why it’s best practice to end module files with 1;.
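As a quick aside, the Exporter mechanism can be demonstrated in a single file by using an inline package. My::Demo and double() below are made-up names purely for illustration; in a real module the package would live in its own .pm file:

```perl
use strict;
use warnings;

# A single-file sketch of the Exporter/@EXPORT_OK mechanism.
package My::Demo;
use Exporter 'import';
our @EXPORT_OK = qw(double);
sub double { return 2 * $_[0] }

package main;

# This is what `use My::Demo qw(double)` would do at compile time:
My::Demo->import('double');

print double(21), "\n";   # prints: 42
```

Because double is in @EXPORT_OK rather than @EXPORT, it is only exported when the caller asks for it by name.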

Copying all the code except the main() routine from geo-fit-plot-data.pl into Utils.pm, we end up with this:

package Geo::FIT::Utils;

use strict;
use warnings;

use Exporter 5.57 'import';
use Geo::FIT;
use Scalar::Util qw(reftype);
use List::Util qw(max sum);
use Chart::Gnuplot;
use DateTime::Format::Strptime;


my $date_parser = DateTime::Format::Strptime->new(
    pattern => "%Y-%m-%dT%H:%M:%SZ",
    time_zone => 'UTC',
);

sub extract_activity_data {
    my $fit = Geo::FIT->new();
    $fit->file( "2025-05-08-07-58-33.fit" );
    $fit->open or die $fit->error;

    my $record_callback = sub {
        my ($self, $descriptor, $values) = @_;
        my @all_field_names = $self->fields_list($descriptor);

        my %event_data;
        for my $field_name (@all_field_names) {
            my $field_value = $self->field_value($field_name, $descriptor, $values);
            if ($field_value =~ /[a-zA-Z]/) {
                $event_data{$field_name} = $field_value;
            }
        }

        return \%event_data;
    };

    $fit->data_message_callback_by_name('record', $record_callback ) or die $fit->error;

    my @header_things = $fit->fetch_header;

    my $event_data;
    my @activity_data;
    do {
        $event_data = $fit->fetch;
        my $reftype = reftype $event_data;
        if (defined $reftype && $reftype eq 'HASH' && defined %$event_data{'timestamp'}) {
            push @activity_data, $event_data;
        }
    } while ( $event_data );

    $fit->close;

    return @activity_data;
}

# extract and return the numerical parts of an array of FIT data values
sub num_parts {
    my $field_name = shift;
    my @activity_data = @_;

    return map { (split ' ', $_->{$field_name})[0] } @activity_data;
}

# return the average of an array of numbers
sub avg {
    my @array = @_;

    return (sum @array) / (scalar @array);
}

sub show_activity_statistics {
    my @activity_data = @_;

    print "Found ", scalar @activity_data, " entries in FIT file\n";
    my $available_fields = join ", ", sort keys %{$activity_data[0]};
    print "Available fields: $available_fields\n";

    my $total_distance_m = (split ' ', ${$activity_data[-1]}{'distance'})[0];
    my $total_distance = $total_distance_m/1000;
    print "Total distance: $total_distance km\n";

    my @speeds = num_parts('speed', @activity_data);
    my $maximum_speed = max @speeds;
    my $maximum_speed_km = $maximum_speed*3.6;
    print "Maximum speed: $maximum_speed m/s = $maximum_speed_km km/h\n";

    my $average_speed = avg(@speeds);
    my $average_speed_km = sprintf("%0.2f", $average_speed*3.6);
    $average_speed = sprintf("%0.2f", $average_speed);
    print "Average speed: $average_speed m/s = $average_speed_km km/h\n";

    my @powers = num_parts('power', @activity_data);
    my $maximum_power = max @powers;
    print "Maximum power: $maximum_power W\n";

    my $average_power = avg(@powers);
    $average_power = sprintf("%0.2f", $average_power);
    print "Average power: $average_power W\n";

    my @heart_rates = num_parts('heart_rate', @activity_data);
    my $maximum_heart_rate = max @heart_rates;
    print "Maximum heart rate: $maximum_heart_rate bpm\n";

    my $average_heart_rate = avg(@heart_rates);
    $average_heart_rate = sprintf("%0.2f", $average_heart_rate);
    print "Average heart rate: $average_heart_rate bpm\n";
}

sub plot_activity_data {
    my @activity_data = @_;

    # extract data to plot from full activity data
    my @times = get_time_data(@activity_data);
    my @heart_rates = num_parts('heart_rate', @activity_data);
    my @powers = num_parts('power', @activity_data);

    # plot data
    my $date = get_date(@activity_data);
    my $chart = Chart::Gnuplot->new(
        output => "watopia-figure-8-heart-rate-and-power.png",
        title  => "Figure 8 in Watopia on $date: heart rate and power over time",
        xlabel => "Elapsed time (min)",
        ylabel => "Heart rate (bpm)",
        terminal => "png size 1024, 768",
        xtics => {
            incr => 5,
        },
        ytics => {
            mirror => "off",
        },
        y2label => 'Power (W)',
        y2range => [0, 1100],
        y2tics => {
            incr => 100,
        },
    );

    my $heart_rate_ds = Chart::Gnuplot::DataSet->new(
        xdata => \@times,
        ydata => \@heart_rates,
        style => "lines",
    );

    my $power_ds = Chart::Gnuplot::DataSet->new(
        xdata => \@times,
        ydata => \@powers,
        style => "lines",
        axes => "x1y2",
    );

    $chart->plot2d($power_ds, $heart_rate_ds);
}

sub get_time_data {
    my @activity_data = @_;

    # get the epoch time for the first point in the time data
    my @timestamps = map { $_->{'timestamp'} } @activity_data;
    my $first_epoch_time = $date_parser->parse_datetime($timestamps[0])->epoch;

    # convert timestamp data to elapsed minutes from start of activity
    my @times = map {
        my $dt = $date_parser->parse_datetime($_);
        my $epoch_time = $dt->epoch;
        my $elapsed_time = ($epoch_time - $first_epoch_time)/60;
        $elapsed_time;
    } @timestamps;

    return @times;
}

sub get_date {
    my @activity_data = @_;

    # determine date from timestamp data
    my @timestamps = map { $_->{'timestamp'} } @activity_data;
    my $dt = $date_parser->parse_datetime($timestamps[0]);
    my $date = $dt->strftime("%Y-%m-%d");

    return $date;
}

our @EXPORT_OK = qw(
    extract_activity_data
    show_activity_statistics
    plot_activity_data
    get_time_data
    num_parts
);

1;

… which is what we had before, but put into a nice package for easier use.

One upside to having put all this code into a module is that the geo-fit-plot-data.pl script is now much simpler:

use strict;
use warnings;

use Geo::FIT::Utils qw(
    extract_activity_data
    show_activity_statistics
    plot_activity_data
);


sub main {
    my @activity_data = extract_activity_data();

    show_activity_statistics(@activity_data);
    plot_activity_data(@activity_data);
}

main();

Poking and prodding

We’re now ready to investigate our power and heart rate data interactively!

Start pdl and enter use lib 'lib' at the pdl> prompt so that it can find our new module:6

$ pdl
perlDL shell v1.357
 PDL comes with ABSOLUTELY NO WARRANTY. For details, see the file
 'COPYING' in the PDL distribution. This is free software and you
 are welcome to redistribute it under certain conditions, see
 the same file for details.
ReadLines, NiceSlice, MultiLines  enabled
Reading PDL/default.perldlrc...
Found docs database /home/vagrant/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/x86_64-linux/PDL/pdldoc.db
Type 'help' for online help
Type 'demo' for online demos
Loaded PDL v2.100 (supports bad values)

Note: AutoLoader not enabled ('use PDL::AutoLoader' recommended)

pdl> use lib 'lib'

Now import the functions we wish to use:

pdl> use Geo::FIT::Utils qw(extract_activity_data get_time_data num_parts)

Since we need the activity data from the FIT file to pass to the other routines, we grab it and put it into a variable:

pdl> @activity_data = extract_activity_data

We also need to load the time data:

pdl> @times = get_time_data(@activity_data)

which we can then read into a PDL array:

pdl> $time = pdl \@times

With the time data in a PDL array, we can manipulate it more easily. For instance, we can display elements of the array with the PDL print statement in combination with the slice() method. The following code shows the last five elements of the $time array (the range "-1:-5" counts backwards from the end, so the elements appear in reverse order):

pdl> print $time->slice("-1:-5")
[54.5333333333333 54.5166666666667 54.5 54.4833333333333 54.4666666666667]

Loading power output and heart rate data into PDL arrays works similarly:

pdl> @powers = num_parts('power', @activity_data)

pdl> $power = pdl \@powers

pdl> @heart_rates = num_parts('heart_rate', @activity_data)

pdl> $heart_rate = pdl \@heart_rates

In the previous article, we wanted to know what the maximum power was for the second sprint. Here’s the graph again for context:

Plot of heart rate and power versus elapsed time in minutes

Eyeballing the graph from above, we can see that the second sprint occurred between approximately 47 and 48 minutes elapsed time. We know that the arrays of time and power data all have the same length. Thus, if we find out the indices of the $time array between these times, we can use them to select the corresponding power data. To get array indices for known data values, we use the PDL which command:

pdl> $indices = which(47 < $time & $time < 48)

pdl> print $indices
[2821 2822 2823 2824 2825 2826 2827 2828 2829 2830 2831 2832 2833 2834 2835
 2836 2837 2838 2839 2840 2841 2842 2843 2844 2845 2846 2847 2848 2849 2850
 2851 2852 2853 2854 2855 2856 2857 2858 2859 2860 2861 2862 2863 2864 2865
 2866 2867 2868 2869 2870 2871 2872 2873 2874 2875 2876 2877 2878 2879]

We can check that we’ve got the correct range of time values by passing the $indices array as a slice of the $time array:

pdl> print $time($indices)
[47.0166666666667 47.0333333333333 47.05 47.0666666666667 47.0833333333333
 47.1 47.1166666666667 47.1333333333333 47.15 47.1666666666667
 47.1833333333333 47.2 47.2166666666667 47.2333333333333 47.25
 47.2666666666667 47.2833333333333 47.3 47.3166666666667 47.3333333333333
 47.35 47.3666666666667 47.3833333333333 47.4 47.4166666666667
 47.4333333333333 47.45 47.4666666666667 47.4833333333333 47.5
 47.5166666666667 47.5333333333333 47.55 47.5666666666667 47.5833333333333
 47.6 47.6166666666667 47.6333333333333 47.65 47.6666666666667
 47.6833333333333 47.7 47.7166666666667 47.7333333333333 47.75
 47.7666666666667 47.7833333333333 47.8 47.8166666666667 47.8333333333333
 47.85 47.8666666666667 47.8833333333333 47.9 47.9166666666667
 47.9333333333333 47.95 47.9666666666667 47.9833333333333]

The time values lie between 47 and 48, so we can conclude that we’ve selected the correct indices.

Note that we have to use the bitwise AND operator (&) here rather than the logical && operator: PDL overloads & to operate on an element-by-element basis across the array, whereas && cannot be overloaded and would try to evaluate the truth of each whole array at once.

Selecting $power array values at these indices is as simple as passing the $indices array as a slice:

pdl> print $power($indices)
[229 231 232 218 210 204 255 252 286 241 231 237 260 256 287 299 318 337 305
 276 320 289 280 301 320 303 395 266 302 341 299 287 309 279 294 284 266 281
 367 497 578 512 762 932 907 809 821 847 789 740 657 649 722 715 669 657 705
 643 647]

Using the max() method on this output gives us the maximum power:

pdl> print $power($indices)->max
932

In other words, the maximum power for the second sprint was 932 W. Not as good as the first sprint (which achieved 1023 W), but I was getting tired by this stage.

The same procedure allows us to find the maximum power for the first sprint with PDL. Again, eyeballing the graph above, we can see that the peak for the first sprint occurred between 24 and 26 minutes. Constructing the query in PDL, we have

pdl> print $power(which(24 < $time & $time < 26))->max
1023

which gives the maximum power value we expect.
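For comparison, here is how the same select-by-index idea looks in plain Perl without PDL, using grep over indices. The data here is a toy stand-in, since the real arrays come from the FIT file:

```perl
use strict;
use warnings;
use List::Util qw(max);

# Toy stand-ins for the real @times (minutes) and @powers (W) arrays.
my @times  = (23.8, 24.2, 24.9, 25.5, 26.1);
my @powers = (310,  650,  1023, 980,  400);

# Indices where 24 < time < 26, analogous to PDL's which().
my @indices = grep { 24 < $times[$_] && $times[$_] < 26 } 0 .. $#times;

# Select the corresponding power values, analogous to $power($indices).
my $max_power = max @powers[@indices];
print "$max_power\n";   # prints: 1023
```

The PDL version does the same thing, but element-wise operations and slicing make it both terser and faster on large arrays.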

We can also find out the maximum heart rate values around these times. E.g. for the first sprint:

pdl> print $heart_rate(which(24 < $time & $time < 26))->max
157

in other words, 157 bpm. For the second sprint, we have:

pdl> print $heart_rate(which(47 < $time & $time < 49))->max
165

i.e. 165 bpm, which matches the value that we found earlier. Note that I broadened the range of times to search over heart rate data here because its peak occurred a bit after the power peak for the second sprint.

Looking forward

Where to from here? Well, we could extend this code to handle processing multiple FIT files. This would allow us to find trends over many activities and longer periods. Perhaps there are other data sources that one could combine with longer trends. For instance, if one has access to weight data over time, then it’d be possible to work out things like power-to-weight ratios. Maybe looking at power and heart rate trends over a longer time can identify things such as overtraining. I’m not a sport scientist, so I don’t know how to go about that, yet it’s a possibility. Since we’ve got fine-grained, per-ride data, if we can combine this with longer-term analysis, there are probably many more interesting tidbits hiding in there that we can look at and think about.

Open question

One thing I haven’t been able to work out is where the calorie information is. As far as I can tell, Zwift calculates how many calories were burned during a given ride. Also, if one uploads the FIT file to a service such as Strava, then it too shows calories burned and the value is the same. This would imply that Strava is only displaying a value stored in the FIT file. So where is the calorie value in the FIT data? I’ve not been able to find it in the data messages that Geo::FIT reads, so I’ve no idea what’s going on there.

Conclusion

What have we learned? We’ve found out how to read, analyse and plot data from Garmin FIT files all by using Perl modules. Also, we’ve learned how to investigate the data interactively by using the PDL shell. Cool!

One main takeaway that might not be obvious is that you don’t really need online services such as Strava. You should now have the tools to process, analyse and visualise data from your own FIT files. With Geo::FIT, Chart::Gnuplot and a bit of programming, you can glue together the components to provide much of the same (and in some cases, more) functionality yourself.

I wish you lots of fun when playing around with FIT data!


  1. REPL stands for read-eval-print loop and is an environment where one can interactively enter programming language commands and manipulate data. ↩︎

  2. It is, however, possible to (ab)use the Perl debugger and use it as a kind of REPL. Enter perl -de0 and you’re in a Perl environment much like REPLs in other languages. ↩︎

  3. Many thanks to Harald Jörg for pointing this out to me at the recent German Perl and Raku Workshop↩︎

  4. This is an application of “first make the change easy, then make the easy change” (paraphrasing Kent Beck). An important point often overlooked in this quote is that making the change easy can be hard. ↩︎

  5. Not a particularly imaginative name, I know. ↩︎

  6. The documented way to add a path to @INC in pdl is via the -Ilib command line option. Unfortunately, this didn’t work in my test environment: the local lib/ path wasn’t added to @INC and hence using the Geo::FIT::Utils module failed with the error that it couldn’t be located. ↩︎

(dlviii) 7 great CPAN modules released last week

Niceperl

Published by prz on Sunday 27 July 2025 00:22

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater or equal than 12.

  1. App::rdapper - a simple console-based RDAP client.
    • Version: 1.17 on 2025-07-22, with 20 votes
    • Previous CPAN version: 1.16 was before
    • Author: GBROWN
  2. Devel::Cover - Code coverage metrics for Perl
    • Version: 1.51 on 2025-07-26, with 103 votes
    • Previous CPAN version: 1.50 was 1 month, 16 days before
    • Author: PJCJ
  3. Mac::PropertyList - work with Mac plists at a low level
    • Version: 1.604 on 2025-07-23, with 13 votes
    • Previous CPAN version: 1.603_05 was 2 days before
    • Author: BRIANDFOY
  4. Module::CoreList - what modules shipped with versions of perl
    • Version: 5.20250720 on 2025-07-20, with 44 votes
    • Previous CPAN version: 5.20250702 was 17 days before
    • Author: BINGOS
  5. Path::Tiny - File path utility
    • Version: 0.150 on 2025-07-21, with 191 votes
    • Previous CPAN version: 0.149 was before
    • Author: DAGOLDEN
  6. Plack::Middleware::Session - Middleware for session management
    • Version: 0.36 on 2025-07-23, with 40 votes
    • Previous CPAN version: 0.35 was 15 days before
    • Author: MIYAGAWA
  7. Text::CSV_XS - Comma-Separated Values manipulation routines
    • Version: 1.61 on 2025-07-26, with 103 votes
    • Previous CPAN version: 1.60 was 5 months, 26 days before
    • Author: HMBRAND

(dcx) metacpan weekly report - Mojolicious

Niceperl

Published by prz on Sunday 27 July 2025 00:20

This is the weekly favourites list of CPAN distributions. Votes count: 74

Week's winner: Mojolicious (+3)

Build date: 2025/07/26 22:19:25 GMT


Clicked for first time:


Increasing its reputation:

Day 55 of 100 Innovations!

Perl on Medium

Published by Atikin Verse on Friday 25 July 2025 15:58

🦪✨ PERLfectly Smooth Coding is Here! ✨🦪

Proxmox Donates €10,000 to The Perl and Raku Foundation

perl.com

Published on Wednesday 23 July 2025 10:24

The Perl and Raku Foundation (TPRF) is delighted to announce a generous €10,000 donation from Proxmox Server Solutions GmbH, supporting the critical Perl 5 Core Maintenance Fund. Corporate partnerships play a critical role in enabling TPRF to fulfill its mission.

A Partner in Open Source

Proxmox Virtual Environment is a complete, open-source server management platform for enterprise virtualization. It tightly integrates the KVM hypervisor and Linux Containers (LXC), software-defined storage and networking functionality, on a single platform.

Proxmox is an example of an open source company that has built enterprise-grade virtualization technology while maintaining transparency, community engagement, and accessibility as core principles.

Sustaining a Foundation of Modern Computing

The Perl programming language remains a cornerstone of system administration, bioinformatics, web development, and countless other critical applications across industries. However, the ongoing maintenance and development of Perl’s core requires dedicated funding to ensure continued stability, security updates, and feature enhancements.

TPRF is dedicated to the advancement of the Perl and Raku programming languages, through open discussion, collaboration, design, and code. This mission extends beyond language development to encompass community building, educational initiatives, and the crucial task of maintaining the robust infrastructure that millions of applications depend upon.

The Critical Nature of Core Maintenance

Without sustained funding for core maintenance work, even the most established programming languages risk stagnation or security vulnerabilities. The Perl 5 Core Maintenance Fund specifically addresses:

  • Security Updates: Ensuring timely patches for discovered vulnerabilities
  • Performance Optimizations: Maintaining competitive execution speeds
  • Platform Compatibility: Supporting new operating systems and architectures
  • Bug Resolution: Addressing issues reported by the global Perl community
  • Documentation Maintenance: Keeping comprehensive guides current and accessible

Proxmox’s contribution directly enables this essential work to continue uninterrupted, demonstrating a forward-thinking approach to technology stewardship.

A Shared Vision for Open Source Sustainability

This donation reflects a broader understanding within the technology industry that sustainable open source ecosystems require active investment from organizations that benefit from these tools. As companies increasingly seek cost-effective alternatives to proprietary solutions, the importance of maintaining robust open source alternatives becomes paramount.

Looking Forward

By investing in Perl’s continued development, Proxmox contributes to a programming language ecosystem that serves developers, system administrators, and organizations worldwide.

This partnership comes at a crucial time for TPRF. Community support from sponsors like Proxmox enables the foundation to maintain its diverse portfolio of community-serving initiatives.

In today’s ever-evolving technological landscape, system administrators and IT professionals face complex challenges when managing and…

Guide: Pocket PERL During the Upcoming Token Shower

Perl on Medium

Published by Perlin on Wednesday 16 July 2025 08:59

Step-by-step directions to secure PERL through this token shower.

Maintaining Perl 5 Core (Dave Mitchell): June 2025

Perl Foundation News

Published by alh on Monday 14 July 2025 12:13


Dave writes:

I spent last month working on rewriting and modernising perlxs.pod, Perl's reference manual for XS.

It's still a work-in-progress, so nothing's been pushed yet.

Summary:

* 49:49 modernise perlxs.pod

Total:

* 49:49 TOTAL (HH::MM)

Maintaining Perl (Tony Cook) May 2025

Perl Foundation News

Published by alh on Monday 14 July 2025 12:11


Tony writes:

```
[Hours] [Activity]

2025/05/01 Thursday
 0.17 #23232 minor fixes to PR
 1.32 #4106 cleanup, perldelta push for CI
 1.48 #23225 more review
 1.37 #23225 more review, thought I found an issue, testing, but
      couldn’t reproduce

 4.34

2025/05/05 Monday
 0.72 #23242 review, testing, nothing more to say
 0.95 #23244 review, testing and approve

 1.67

2025/05/06 Tuesday
 0.32 #22040 testing and comment
 0.88 github workflow discussion, win32 performance
 0.45 more github workflow, email to list
 0.38 #23202 read through, comment
 0.98 #4106 rebase, basic testing, open PR 23262
 0.60 some basic Win32 profiling

 3.61

2025/05/07 Wednesday
 0.12 #4106 fix minor issue
 0.22 #23259 review, testing and comment
 1.22 #23263 review and approve
 0.05 #23264 review and agree (thumbs up) existing comment
 0.37 #23255 review, research and approve with comment
 0.32 #23234 review, consider API question and approve
 0.47 #23254 review, comment
 0.13 #23253 review, others have pointed out problems (subscribe to PR)
 0.28 #23251 review, testing and comment
 0.23 #22125 rebase, basic testing, make PR 23265
 2.07 #23225 more review

 5.48

2025/05/08 Thursday
 0.22 #23254 review updates and approve
 0.10 #23259 review updates and approve
 1.52 #23202 review updates, comment
 0.58 #22854 research
 2.10 #22854 look for stuff to document here, but it seems to
      mostly be well covered in some form or another.

 4.52

2025/05/12 Monday
 0.42 github notifications
 2.57 #22883 research, debugging, testing, long comment on #22907
 0.37 #22854 minor changes, testing push and make PR 23274
 0.60 #23272 try to work up a fix, comment

 3.96

2025/05/13 Tuesday
 0.33 #23225 follow-up
 1.08 #23272 write some text and make PR 23276
 0.25 #23275 review and comment
 0.58 #23225 more review
 1.23 #23225 more review

 3.47

2025/05/14 Wednesday
 0.12 #23275 comment
 0.50 #23274 minor edit and follow-up
 0.32 #23276 minor edit
 0.08 #23279 review and approve
 0.08 #23279 review and approve
 1.43 #23225 more review
 3.18 #23037 research, testing, comments

 5.71

2025/05/15 Thursday
 0.33 #23287 review and approve
 0.68 #23282 update feature.pm and make PR 23288 (run into some
      github strangeness too)
 1.40 #23225 more review
 0.55 #23282 comment on #23288
 0.40 #23261 comment
 1.10 #23225 more review

 4.46

2025/05/19 Monday
 0.30 #23282 re-work docs
 0.72 #23304 comment on rt.cpan ticket
 0.18 #23282 more re-work docs
 0.23 #23297 review and approve
 0.35 #23298 review and approve
 0.08 #23299 review and approve
 0.25 #23301 review, checks and comment
 0.08 #23302 review and approve
 0.08 #23303 review and approve
 1.65 #23225 more review

 3.92

2025/05/20 Tuesday
 0.48 #23301 review updates, testing and comment
 0.23 #23307 testing
 0.08 #23305 review and approve
 1.52 #23225 more review

 2.31

2025/05/21 Wednesday
 2.92 #23225 more review, comments
 1.52 #23310 debugging, fix and make PR 23312 and make issue 23313
 0.53 #22883 make a PR for the perlio approach, PR 23314

 4.97

2025/05/22 Thursday
 1.48 #22883 fixes to PR, thinking and comment, on 23314, work on
      rebasing the 22987 PR
 0.17 fix badly merged cygwin perldelta note PR 23316
 1.63 #23225 final? pass over the complete changed files
 1.65 #23225 final? pass continued and finished

 4.93

Which I calculate is 53.35 hours.

Approximately 37 tickets were reviewed or worked on.
```

Perl Developer, New Jersey (TEKsystems)

Perl Jobs

Published on Monday 14 July 2025 00:00

- Mostly remote, but need to be able to go onsite in Middletown, NJ occasionally.
- Should be a Perl expert
- Should help the team to ensure a clean Perl environment in non-prod and production environment
- Strong scripting knowledge
- Strong in linux commands
- Hands on experience in implementing File Integrity Monitoring (FIM) process and sound knowledge of FIM process
- Should support with deployment
- Apache server
- Strong SQL knowledge
- Strong PostgreSQL
- Azure
- Work with ops to onboard clients and configure the new client in upstart
- Work with clients to resolve their issues
- Should have strong written and oral communications
- Should be able to independently create response to the audit questions
- Review any process or code with internal and external audit team
- Good to have enterprise Identity Lifecycle Management experience

The examples used here are from the weekly challenge problem statement and demonstrate the working solution.

Part 1: Counter Integers

You are given a string containing only lower case English letters and digits. Write a script to replace every non-digit character with a space and then return all the distinct integers left.

The code can be contained in a single file which has the following structure. The different code sections are explained in detail later.

"ch-1.pl" 1


use GD;
use JSON;
use OCR::OcrSpace;
⟨write text to image 3⟩
⟨ocr image 4⟩
⟨main 5⟩

We don’t really need to do the replacement with spaces since we could just use a regex to get the numbers or even just iterate over the string character by character. Still though, in the spirit of fun we’ll do it anyway.

⟨replace all non-digit characters with a space 2⟩ ≡


$s =~ tr/a-z/ /;

Fragment referenced in 4.

Uses: $s 4.

Ok, sure, now we have a string with spaces and numbers. Now we have to use a regex (maybe with split, or maybe not) or loop over the string anyway to get the numbers. But we could have just done that from the beginning! Well, let’s force ourselves to do something which makes use of our converted string. We are going to write the new string with spaces and numbers to a PNG image file. Later we are going to OCR the results.
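For reference, the direct non-OCR version of the task is only a few lines; this standalone sketch shows the tr-then-split approach on the first example input:

```perl
use strict;
use warnings;

# Replace every lowercase letter with a space, then split out the numbers.
my $s = 'the1weekly2challenge2';
(my $t = $s) =~ tr/a-z/ /;          # "   1      2         2"
my @numbers = split ' ', $t;        # split on runs of whitespace

print join(', ', @numbers), "\n";   # prints: 1, 2, 2
```

Everything that follows is the deliberately roundabout image-and-OCR version of this same idea.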

The image will be 500x500 and be black text on a white background for ease of character recognition. This fixed size is fine for the examples, more complex examples would require dynamic sizing of the image. The font choice is somewhat arbitrary, although intuitively a fixed width font like Courier should be easier to OCR.

The file paths used here are for my system, MacOS 15.4.

⟨write text to image 3⟩ ≡


sub write_image{
    my($s) = @_;
    my $width = 500;
    my $height = 500;
    my $image_file = q#/tmp/output_image.png#;
    my $image = GD::Image->new($width, $height);
    my $white = $image->colorAllocate(255, 255, 255);
    my $black = $image->colorAllocate(0, 0, 0);
    $image->filledRectangle(0, 0, $width - 1, $height - 1, $white);
    my $font_path = q#/System/Library/Fonts/Courier.ttc#;
    my $font_size = 14;
    $image->stringFT($black, $font_path, $font_size, 0, 10, 50, $s);
    open TEMP, q/>/, qq/$image_file/;
    binmode TEMP;
    print TEMP $image->png;
    close TEMP;
    return $image_file;
}

Fragment referenced in 1.

Uses: $s 4.

This second subroutine will handle the OCRing of the image. It’ll also be the main subroutine we call which produces the final result.

After experimenting with tesseract and other open source OCR options it seemed far easier to make use of a hosted service. OCR::OcrSpace is a module ready-made for interacting with OcrSpace, an OCR solution provider that offers a free tier of service suitable for our needs. Registration is required in order to obtain an API key.

⟨ocr image 4⟩ ≡


sub counter_integers{
    my($s) = @_;
    my @numbers;
    ⟨replace all non-digit characters with a space 2⟩
    my $image = write_image($s);
    my $ocrspace = OCR::OcrSpace->new();
    my $ocrspace_parameters = { file => qq/$image/,
                                apikey => q/XXXXXXX/,
                                filetype => q/PNG/,
                                scale => q/True/,
                                isOverlayRequired => q/True/,
                                OCREngine => 2};
    my $result = $ocrspace->get_result($ocrspace_parameters);
    $result = decode_json($result);
    my $lines = $result->{ParsedResults}[0]
                       ->{TextOverlay}
                       ->{Lines};
    for my $line (@{$lines}){
        for my $word (@{$line->{Words}}){
            push @numbers, $word->{WordText};
        }
    }
    return join q/, /, @numbers;
}

Fragment referenced in 1.

Defines: $s 2, 3.

Just to make sure things work as expected we’ll define a few short tests.

⟨main 5⟩ ≡


MAIN:{
    print counter_integers(q/the1weekly2challenge2/);
    print qq/\n/;
    print counter_integers(q/go21od1lu5c7k/);
    print qq/\n/;
    print counter_integers(q/4p3e2r1l/);
    print qq/\n/;
}

Fragment referenced in 1.

Sample Run
$ perl perl/ch-1.pl 
1, 2, 2 
21, 1, 5, 7 
4, 3, 2, 1
    

Part 2: Nice String

You are given a string made up of lower and upper case English letters only. Write a script to return the longest substring of the given string which is nice. A string is nice if, for every letter of the alphabet that the string contains, it appears both in uppercase and lowercase.

"ch-2.pl" 6


use v5.40;
⟨is_nice 7⟩
⟨nice substring 8⟩
⟨main 9⟩

We’ll do this in two subroutines: one for confirming if a substring is nice, and another for generating substrings.

This subroutine examines each letter and sets a hash value for both upper and lower case versions of the letter as they are seen. We return true if all letters have both an upper and lower case version.

⟨is_nice 7⟩ ≡


sub is_nice{
    my ($s) = @_;
    my %seen;
    for my $c (split //, $s){
        if($c =~ m/[a-z]/) {
            $seen{$c}{lower} = 1;
        }
        elsif($c =~ m/[A-Z]/) {
            $seen{lc($c)}{upper} = 1;
        }
    }
    for my $c (keys %seen){
        return 0 unless exists $seen{$c}{lower} &&
                        exists $seen{$c}{upper};
    }
    return 1;
}

Fragment referenced in 6.
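As a quick sanity check, the same upper/lower bookkeeping can be exercised standalone. This re-states the sub outside the literate-programming fragments, purely for illustration:

```perl
use strict;
use warnings;

# Standalone restatement of is_nice: track which letters have been seen
# in lowercase and uppercase, then require both forms for every letter.
sub is_nice {
    my ($s) = @_;
    my %seen;
    for my $c (split //, $s) {
        $seen{$c}{lower}    = 1 if $c =~ /[a-z]/;
        $seen{lc $c}{upper} = 1 if $c =~ /[A-Z]/;
    }
    for my $c (keys %seen) {
        return 0 unless $seen{$c}{lower} && $seen{$c}{upper};
    }
    return 1;
}

print is_nice('aA'),  "\n";   # prints: 1
print is_nice('aAb'), "\n";   # prints: 0 (b has no uppercase partner)
```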

Here we just generate all substrings in a nested loop.

⟨nice substring 8⟩ ≡


sub nice_substring{
    my ($s) = @_;
    my $n = length($s);
    my $longest = q//;
    for my $i (0 .. $n - 1) {
        for my $j ($i + 1 .. $n) {
            my $substring = substr($s, $i, $j - $i);
            if (is_nice($substring) &&
                length($substring) > length($longest)){
                $longest = $substring;
            }
        }
    }
    return $longest;
}

Fragment referenced in 6.

The main section is just some basic tests.

⟨main 9⟩ ≡


MAIN:{
    say nice_substring(q/YaaAho/);
    say nice_substring(q/cC/);
    say nice_substring(q/A/);
}

Fragment referenced in 6.

Sample Run
$ perl perl/ch-2.pl 
aaA 
cC 
 
    

References

OCR API Service
The Weekly Challenge 329
Generated Code