In certain circumstances, users running a Serendipity Blackmagic / Veripress server with one or more additional Cluster Nodes may encounter a job Imaging Failure caused by a duplicate job ID.
Duplicate job IDs occur only rarely. They are caused by a sudden or incomplete shutdown, or by a disconnection of the Blackmagic / Veripress server from the Cluster Node while jobs are being processed on the Node. A duplicate ID will prevent Imaging of any subsequent job that bears the same job ID.
The Server Log will show error lines similar to the following:
    server Imaging time: 00:00:05 for job 1004521 Daily Publication page 2 185209664 1551177497
    ssjob.c SSJobAddToQueue Duplicate jobid 2118971 encountered 185209664 1551177497
    render.c:1020 ImageJob Cannot add job 1004521 Daily Publication page 2 to queue 185209664 1551177497
    slaveports.c ImageQueueManager Job 1004521 Daily Publication page 2 failed to image 185209664 1551177504
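If a copy of the Server Log has been saved to a plain-text file, a short script such as the sketch below can confirm whether duplicate job IDs are the cause before you try either solution. This is only an illustration: the log file name (server_log.txt) is a placeholder for wherever you saved the log, and the message patterns are taken from the example lines above.

    # Minimal sketch: scan a saved copy of the Server Log for the
    # "Duplicate jobid" and "failed to image" messages shown above.
    # server_log.txt is a placeholder; point it at your own exported log file.
    import re

    LOG_PATH = "server_log.txt"  # assumed export location, adjust as needed

    duplicate_re = re.compile(r"Duplicate jobid (\d+)")
    failed_re = re.compile(r"Job (\d+) (.+?) failed to image")

    with open(LOG_PATH, "r", errors="replace") as log:
        for line_no, line in enumerate(log, start=1):
            dup = duplicate_re.search(line)
            if dup:
                print(f"line {line_no}: duplicate job ID {dup.group(1)} reported")
            fail = failed_re.search(line)
            if fail:
                print(f"line {line_no}: job {fail.group(1)} ({fail.group(2)}) failed to image")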
SOLUTION
Restart the Cluster Node
In some cases the issue can be resolved by a restart of the Cluster Node server:
- Allow all jobs in the QueueManager to finish processing.
- Delete any Failed jobs from the QueueManager.
- Shut down the Cluster Node server(s).
- Stop (or shut down), then restart the main Blackmagic / Veripress server.
- Restart the Cluster Node server(s).
- Re-submit the failed jobs.
If the jobs Image and Render, the problem should be resolved.
If the jobs fail to Image again, proceed to the next solution:
Remove and rebuild the Cluster Node job database
Delete the Cluster Node job database and allow the Cluster Node(s) to recreate it:
- Allow all jobs in the QueueManager to finish processing.
- Delete any Failed jobs from the QueueManager.
- Shut down the Cluster Node server(s).
- Stop (or shut down), then restart the main Blackmagic / Veripress server.
- On the Cluster Node machine(s):
  - Navigate to the …/Serendipity/Serendipity Blackmagic/lib/ folder (or the …/Serendipity/Veripress/lib/ folder for a Veripress installation).
  - Delete the defaultss.dbd folder and the defaultss.dbh file (a scripted sketch of this step follows the list).
- Restart the Cluster Node server(s). A new defaultss.dbd folder and defaultss.dbh file will be created.
- Re-submit the failed jobs.
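The file removal step above can also be scripted. The sketch below is one cautious way to do it, assuming it is run on the Cluster Node machine while the Cluster Node server is shut down; LIB_DIR is a placeholder that must be set to the full path of your own Serendipity Blackmagic (or Veripress) lib folder. Rather than deleting defaultss.dbd and defaultss.dbh outright, it moves them into a timestamped backup folder; the server still recreates fresh copies on restart, and the originals can be restored if anything goes wrong.

    # Minimal sketch: move the Cluster Node job database out of the lib folder
    # so the server rebuilds it on restart. LIB_DIR is a placeholder and must
    # point at the actual .../Serendipity/.../lib folder on the Cluster Node.
    import shutil
    import time
    from pathlib import Path

    LIB_DIR = Path("/path/to/Serendipity/lib")  # placeholder, adjust to your install

    backup_dir = LIB_DIR / ("dbd_backup_" + time.strftime("%Y%m%d_%H%M%S"))
    backup_dir.mkdir()

    for name in ("defaultss.dbd", "defaultss.dbh"):
        target = LIB_DIR / name
        if target.exists():
            # shutil.move handles both the .dbd folder and the .dbh file
            shutil.move(str(target), str(backup_dir / name))
            print(f"Moved {target} -> {backup_dir / name}")
        else:
            print(f"{target} not found; nothing to move")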
Your workflow should now be functioning normally.