Falcon LogScale 1.183.1 LTS (2025-05-01)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Downgrades To | Config. Changes
---|---|---|---|---|---|---|---|---
1.183.1 | LTS | 2025-05-01 | Cloud, On-Prem | 2026-05-31 | Yes | 1.150.0 | 1.177.0 | No
Download
Use docker pull humio/humio-core:1.183.1 to download the latest version
These notes include entries from the following previous releases: 1.183.0, 1.182.0, 1.181.0, 1.180.0, 1.179.0, 1.178.0
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Automation and Alerts
Important Notice: Downgrade Considerations
Enhancements to Aggregate alerts in version 1.176 include additional state tracking for errors and warnings. While this is an improvement, it does require attention if you need to downgrade to an earlier version.
Potential Impact:
If you downgrade from 1.176 or above to 1.175 or below, you may encounter errors related to Aggregate Alerts, causing Aggregate Alerts to not run to completion.
Resolution Steps:
After downgrading, if you encounter errors containing Error message and error in phase must either both be set or not set, do the following:
Identify affected Aggregate Alerts by executing the following GraphQL query:
```graphql
query q1 {
  searchDomains {
    name
    aggregateAlerts { id, lastError, lastWarnings }
  }
}
```
Document the IDs of any affected alerts having warnings and no errors set.
Apply the resolution – for each identified alert with warnings (and optionally errors), apply this GraphQL mutation, replacing INSERT with your actual view name and alert ID:
```graphql
mutation m1 {
  clearErrorOnAggregateAlert(input:{viewName:"INSERT", id:"INSERT"}) { id }
}
```
Keep track of modified alert IDs for future reference.
Verify the resolution – confirm that the system returns to normal operation, and monitor for any additional error messages using a LogScale query and/or alert, such as:
```logscale
#kind=logs class="c.h.c.Context" "Error message and error in phase must either both be set or not set"
```
These steps will reset the Aggregate Alerts and restore the system to normal operation.
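The remediation steps above can be sketched as a small helper script. This is a hedged illustration, not an official tool: it only filters the query response and builds the GraphQL payloads; how you send them (endpoint, authentication) depends on your deployment.

```python
import json

# Illustrative sketch of the remediation steps above. Assumption: you send
# these payloads to your cluster's GraphQL endpoint with a valid API token.

LIST_ALERTS_QUERY = (
    'query q1 { searchDomains { name '
    'aggregateAlerts { id, lastError, lastWarnings } } }'
)

def affected_alerts(search_domains):
    """Step 2: collect (view name, alert id) pairs for alerts that have
    warnings set but no error."""
    hits = []
    for domain in search_domains:
        for alert in domain.get("aggregateAlerts") or []:
            if alert.get("lastWarnings") and not alert.get("lastError"):
                hits.append((domain["name"], alert["id"]))
    return hits

def clear_error_payload(view_name, alert_id):
    """Step 3: build the clearErrorOnAggregateAlert mutation body."""
    mutation = (
        'mutation m1 { clearErrorOnAggregateAlert('
        f'input:{{viewName:"{view_name}",id:"{alert_id}"}}) {{id}} }}'
    )
    return json.dumps({"query": mutation})
```

Keep a record of the alert IDs you clear (step 4) so you can verify recovery afterwards.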
Removed
Items that have been removed as of this release.
GraphQL API
The following items have been removed:
- The assetType field on the Alert, Dashboard, ViewInteraction, and SavedQuery types.
- The GraphQL mutations createAlertFromTemplate, createAlertFromPackageTemplate, createScheduledSearchFromTemplate, and createScheduledSearchFromPackageTemplate.
- The enumeration value ChangeTriggersAndActions from the enumeration types Permission and ViewAction.
Deprecation
Items that have been deprecated and may be removed in a future release.
- The color field on the Role type has been marked as deprecated (will be removed in version 1.195).
- The storage task of the GraphQL NodeTaskEnum is deprecated and scheduled to be removed in version 1.185. This affects the following items:
  - The supportedTasks field of the ClusterNode type.
  - The assignedTasks field of the ClusterNode type.
  - The unassignedTasks field of the ClusterNode type.
  - The assignTasks() mutation.
  - The unassignTasks() mutation.
  - The INITIAL_DISABLED_NODE_TASKS configuration variable.
- LogScale is deprecating free-text searches that occur after the first aggregate function in a query. These searches likely did not and will not work as expected. Starting with version 1.189.0, this functionality will no longer be available. A free-text search after the first aggregate function refers to any text filter that is not specific to a field and appears after the query's first aggregate function. For example, this syntax is deprecated:
```logscale
"Lorem ipsum dolor" | tail(200) | "sit amet, consectetur"
```
Some uses of the wildcard() function, particularly those that do not specify a field argument, are also free-text searches and are therefore deprecated as well. Regex literals that are not bound to a specific field, for example /(abra|kadabra)/, are also free-text searches and are thus also deprecated after the first aggregate function. To work around this issue, you can:
Move the free-text search in front of the first aggregate function.
Search specifically in the @rawstring field.
If you know the field that contains the value you're searching for, it's best to search that particular field. The field may have been added by either the log shipper or the parser, and the information might not appear in the @rawstring field.
Free-text searches before the first aggregate function are not deprecated and continue to work as expected. Field-specific text searches also work as expected: for example, myField=/(abra|kadabra)/ continues to work after the first aggregate function.
The use of the event functions eventInternals(), eventFieldCount(), and eventSize() after the first aggregate function is deprecated. For example (invalid example for demonstration, do not use):
```logscale
eventSize() | tail(200) | eventInternals()
```
Usage of these functions after the first aggregate function is deprecated because they work on the original events, which are not available after the first aggregate function.
Using these functions after the first aggregate function will be made unavailable in version 1.189.0 and onwards.
These functions will continue to work before the first aggregate function, for example:
```logscale
eventSize() | tail(200)
```
The setConsideredAliveUntil and setConsideredAliveFor GraphQL mutations are deprecated and will be removed in 1.195.
The lastScheduledSearch field on the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.
The EXTRA_KAFKA_CONFIGS_FILE configuration variable has been deprecated and is planned to be removed no earlier than version 1.225.0. For more information, see RN Issue.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Configuration
KAFKA_MANAGED_BY_HUMIO has updated behavior. When the default setting KAFKA_MANAGED_BY_HUMIO=true is applied, LogScale will:
- Set up new topics (ingest queue, global, and chatter) using default settings
- No longer modify existing topic configurations to conform to these defaults
Key benefits of this change:
- Clearer separation of responsibilities between LogScale and Kafka
- Topic settings management is handled via Kafka admin scripts, not by editing LogScale settings
Important
Ensure the Kafka cluster is fully operational and verify the required number of brokers in the Kafka cluster: LogScale will fail to start if topic creation is needed and fewer Kafka brokers are available than are required by the configured replication factor.
The following configuration options now only apply on initial creation of a topic:
- TOPIC_MAX_MESSAGE_BYTES (max.message.bytes for the chatter and ingest-queue topics)
Customers wishing to customize these can do so via the scripts shipping with their Kafka install, for example kafka/bin/kafka-configs.sh.
The following configuration variables are now available, and are only applied when a topic is being initially created:
- CHATTER_INITIAL_REPLICATION_FACTOR (default is 3)
- INGEST_QUEUE_INITIAL_REPLICATION_FACTOR (default is 3)
- GLOBAL_INITIAL_REPLICATION_FACTOR (default is 3)
The min.insync.replicas setting will initially be set to the replication factor minus 1, to allow for the loss of one replica. You can customize these settings via the scripts shipping with your Kafka install, for example kafka/bin/kafka-topics.sh and kafka/bin/kafka-configs.sh.
The default initial replication factor for the ingest queue is now 3; it was previously 2.
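As a quick sanity check, the initial topic settings described above can be sketched as follows. This is an illustration of the documented defaults, not LogScale code; the dictionary keys mirror the Kafka configuration names.

```python
# Sketch of the initial topic settings described above: the replication
# factor defaults to 3, and min.insync.replicas starts at the replication
# factor minus 1, allowing the loss of one replica. Later changes must be
# made via Kafka admin tooling (for example kafka/bin/kafka-configs.sh).
DEFAULT_INITIAL_REPLICATION_FACTOR = 3

def initial_topic_settings(replication_factor=DEFAULT_INITIAL_REPLICATION_FACTOR):
    # Applied only when a topic is first created; LogScale no longer
    # modifies existing topic configurations.
    return {
        "replication.factor": replication_factor,
        "min.insync.replicas": replication_factor - 1,
    }
```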
The semantics of the S3_STORAGE_PREFERRED_COPY_SOURCE variable have been adjusted so that LogScale now attempts to fetch from local nodes first and, only if that fails, tries bucket storage. Previously, LogScale would fetch from both local nodes and bucket storage in parallel. The new behavior should reduce the number of fetches from bucket storage on clusters configured this way.
Ingestion
For Self-Hosted customers only. Event Forwarding no longer forwards events with tag grouping and auto sharding applied. This means that tag-grouped fields are now forwarded with their actual value instead of a hashed value. The #humioAutoShard tag is not forwarded either. For more information, see Event Forwarding Rules.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The bundled JDK has been upgraded to Java 24 (version 24.0.1).
Due to a version update of the bundled timezone database in the JDK, users may encounter slight changes in LogScale query behavior: previously, the EST, HST and MST time zone identifiers were linked to the -05:00, -10:00 and -07:00 time zone offsets. They are now linked to America/Panama, Pacific/Honolulu and America/Phoenix. While this doesn't change the time zone behavior itself, it may impact queries that look for time zone identifier strings. For example, the query:
```logscale
createEvents("a=b") | kvParse() | formatTime(format="%Z", as="timezone", timezone=EST) | select(timezone)
```
would return -05:00 on earlier releases. As of this release, it will return America/Panama.
Due to a version update of the bundled CLDR locale data in the JDK, users may encounter slight changes in LogScale query behavior:
First day of week is Monday in UAE, see CLDR-15697
Default numbering system for Arabic in non-Arabic-speaking locations, see CLDR-17553
Comma is added for some date formatting patterns, see CLDR-17812
Some time zone names changed due to them becoming links to other zones, see CLDR-17960
Due to a version update of the bundled Unicode version in the JDK, users may encounter slight changes in LogScale query behavior:
Sorting on strings and case-insensitive regex matching may change to be more correct now if your data contains certain Unicode characters.
This Java release upgrades the Unicode version to 16.0, which includes updated versions of the Unicode Character Database and Unicode Standard Annexes #9, #15, and #29.
Unicode has added 5,185 new characters, for a total of 154,998 characters. The new additions include seven new scripts:
Garay is a modern-use script from West Africa.
Gurung Khema, Kirat Rai, Ol Onal and Sunuwar are four modern-use scripts from Northeast India and Nepal.
Todhri is an historic script used for Albanian.
Tulu-Tigalari is an historic script from Southwest India.
For more details about Unicode 16.0, refer to Unicode Consortium's release note.
Administration and Management
The minimum version to which LogScale can be downgraded is now 1.177.0 (it was 1.157.0).
New features and improvements
Installation and Deployment
A revised query coordination assignment is now enabled by default, which improves resiliency in cases of cluster topology changes.
Administration and Management
The new losable-node-count-before-storage-over-capacity metric of type Gauge is now available, labelled by zone. For each zone, this metric indicates the number of nodes the zone can lose before going over capacity in terms of primary disk storage, taking into account the value of the PRIMARY_STORAGE_MAX_FILL_PERCENTAGE environment variable.
Updated how the losable-node-count-before-storage-over-capacity gauge metric is calculated, to consider secondary storage. When secondary storage is present, it takes precedence over primary storage when calculating the available storage capacity, taking into account the SECONDARY_STORAGE_MAX_FILL_PERCENTAGE configuration setting.
The new metric query-worker-queue-full is now available. This metric tracks the number of times a worker queue was full and a new query submission was rejected as a result.
LogScale has a new internal metric external-ingest-delay to help identify upstream issues. The metric tracks the delay between an event being recorded and it being processed by LogScale, keyed by repository.
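To illustrate what the losable-node-count-before-storage-over-capacity gauge expresses, here is a rough per-zone sketch. The greedy computation below is an illustrative assumption for explanation only, not LogScale's actual implementation.

```python
# Illustrative sketch only: how many nodes can a zone lose before the data
# no longer fits under the configured fill limit on the remaining nodes?
def losable_nodes(used_bytes_per_node, node_capacity_bytes, max_fill_percentage):
    total_used = sum(used_bytes_per_node)
    # Usable capacity per node after applying the fill-percentage limit
    # (cf. PRIMARY_STORAGE_MAX_FILL_PERCENTAGE).
    usable_per_node = node_capacity_bytes * max_fill_percentage / 100
    nodes = len(used_bytes_per_node)
    losable = 0
    while nodes - losable > 1:
        # Would the total data still fit if one more node were lost?
        if total_used <= (nodes - losable - 1) * usable_per_node:
            losable += 1
        else:
            break
    return losable
```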
User Interface
The Lookup Files preview feature in the User Interface now displays a maximum of 500 rows, reduced from 50,000. The lower limit improves the UI performance and matches the existing row limit for Ad-hoc tables previews.
For Self-hosted customers only. Shared files are now integrated into the file list, appearing alongside repository-scoped and view-scoped files. This unified display provides better visibility and easier access to all available files.
Note
This change has been reverted in version 1.183 due to identified issues. The feature will be reintroduced in an upcoming release.
A UI warning now informs whenever a query is being stopped due to internal issues.
Bucket size information is now displayed for Single Value in the widget header on the dashboard, for example when a timeChart() query function is used.
Automation and Alerts
Added more fields to some of the logs for Filter alerts in the humio-activity repository.
Logs for Alert and Scheduled search queries now contain the id and name of the alert (alertId and alertName) or scheduled search (scheduledSearchId and scheduledSearchName).
The Triggers overview now displays new columns:
- Description
- Last error
- Next planned execution (only for scheduled searches)
Storage
For all bucket uploads to S3 using the AWS SDK, LogScale now uses an If-None-Match header. This prevents overwrites of files that already exist in the bucket. If necessary, you can turn off this overwrite protection by setting the S3_STORAGE_DISABLE_UPLOAD_DUPLICATE_CHECK configuration variable to true. Additionally, LogScale will now properly terminate multipart uploads that fail when using the AWS SDK. For more information, see Amazon Bucket Storage Parameters, S3_STORAGE_DISABLE_UPLOAD_DUPLICATE_CHECK.
These two features are now enabled by default:
- DigestersDontNeedMergeTargetMinis
- SegmentRebalancerHandlesMinis
This configuration ensures faster digest reassignment by reducing the number of mini segments fetched by LogScale.
LogScale will now crash if the target bucket for writing is marked readOnly while the cluster is running.
LogScale now uses a fixed TCP receive buffer size for ingest consumers, which defaults to 32 MB. This change replaces the previous automatic buffer size calculation, which did not perform reliably. The operating system supplies buffer limits based on the value defined in the /proc/sys/net/core/rmem_max file. System administrators must modify this file to enable larger buffer sizes. To specify a different buffer size than the default, use the KAFKA_INGEST_QUEUE_CONSUMER_ prefix to pass consumer configuration properties.
GraphQL API
The maximum number of errors returned in the errors field for a GraphQL error is now capped at 100. For example:
```json
{
  "errors": [
    {
      "message": "Unexpected token 'T', \"The reques\"... is not valid JSON",
      "stack": "SyntaxError: Unexpected token 'T', \"The reques\"... is not valid JSON"
    }
  ]
}
```
Any queries that result in more errors than allowed will return 400 Bad Request with a single error stating that the maximum error limit was exceeded.
Short-term stability has been set on the following output fields available on the testParserV2 GraphQL mutation:
- falselyTaggedFields
- arraysWithGaps (and all subfields)
- schemaViolations (and all subfields)
These fields were previously only available in preview form.
Enabled the ReplacePeriodicIngestOffsetPushing feature flag by default, which reduces the load on global from updates to datasource ingestOffsets.
API
Introduced two new API extensions for the Query Jobs API:
Export API. Enables exporting query results in multiple formats:
CSV
JSON
NDJSON
Plain-text
Pagination API:
Enables result pagination instead of receiving complete results per poll
Supports sorting results by specified fields/columns
Helps protect query clients from large result sets.
For more information, see Export API, Pagination API.
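The pagination flow can be sketched generically: instead of pulling the complete result per poll, a client asks for one page at a time until the result is exhausted. Everything below (the fetch_page callback and its page-number protocol) is a hypothetical illustration, not the documented API surface.

```python
def iter_pages(fetch_page):
    """Yield result rows page by page until an empty page is returned.
    fetch_page(page_number) -> list of rows (hypothetical callback that
    would wrap a poll against the Query Jobs API)."""
    page = 0
    while True:
        rows = fetch_page(page)
        if not rows:
            return
        yield from rows
        page += 1

# Example with an in-memory stand-in for the remote API:
data = [[1, 2], [3], []]
rows = list(iter_pages(lambda p: data[p]))
# rows == [1, 2, 3]
```

Paging like this is what protects query clients from having to buffer large result sets in one response.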
The new update-uploaded-files-storage-target subpath is now available for the bucket-storage-target endpoint. The endpoint has a POST form and uses the same arguments as update-segments-storage-target. The GET /api/v1/bucket-storage-target endpoint is being updated to include sharedReplicableFiles in the uploaded files count.
Configuration
The new environment variable FEDERATED_SUBMISSION_TIMEOUT_MILLIS is now available. It is used to set a timeout for multi-cluster query submissions. For more information, see FEDERATED_SUBMISSION_TIMEOUT_MILLIS.
LogScale introduces the new configuration variable PDF_RENDER_SERVICE_CALLBACK_BASE_URL, which can be used to control the callback URL sent to the PDF Render Service used by the Scheduled PDF Reports feature. The default behaviour is to use the PUBLIC_URL variable for the callback URL sent to the render service, but in some deployment scenarios it is beneficial to keep the request traffic internal to the cluster where LogScale is hosted, instead of using the public-facing URL for the requests. This is where this new variable can be used. If LogScale is deployed in multi-organization mode, the callback URL follows the same formatting rules as described for the PUBLIC_URL variable. If PDF_RENDER_SERVICE_CALLBACK_BASE_URL is not configured, then PUBLIC_URL is used. For more information, see PDF_RENDER_SERVICE_CALLBACK_BASE_URL, Adding PDF Render to LogScale Configuration.
The new default value for MAX_EVENT_FIELD_COUNT is 8,000. Previously, it was 1,000.
The new default value for MAX_EVENT_FIELD_COUNT_IN_PARSER is 200,000. Previously, it was 50,000.
Dashboards and Widgets
The Label setting for dashboard parameters now has an increased limit of 200 characters.
Ingestion
In the parser editor, fields on a test case output that contain both numbers and letters will now take numbers into account, sorting numbers numerically rather than lexicographically. For example, the fields myArray[1], myArray[10], and myArray[2] will now be ordered as myArray[1], myArray[2], and myArray[10].
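The numeric-aware ordering described above can be sketched like this (an illustration of the behavior, not the editor's actual code):

```python
import re

def natural_key(field_name):
    # Split into digit and non-digit runs so numbers compare numerically
    # rather than lexicographically.
    return [int(part) if part.isdigit() else part
            for part in re.split(r"(\d+)", field_name)]

fields = ["myArray[1]", "myArray[10]", "myArray[2]"]
ordered = sorted(fields, key=natural_key)
# ordered == ["myArray[1]", "myArray[2]", "myArray[10]"]
```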
Queries
Added an optional field statistics computation to the query result. This computation finds all the fields of the result and the 10 most common values for each field. This is the same information that powers the fields panel of the LogScale UI.
This computation must be enabled on a per-query basis, which can be done by setting the corresponding field to true in the query input.
Multi-Cluster Search can now estimate and report coordinator memory usage. This feature ensures that multi-cluster searches block queries that exceed system memory thresholds.
Searches on @id now target the specific segment when possible. Support for OR is also now available: when the OR operator is used on @id, you can efficiently find a few selected events. This optimization only applies to @id OR conditions, not when OR mentions criteria other than @id.
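For example, a search of this shape (the event IDs are illustrative placeholders) can now be served efficiently by targeting only the segments containing those IDs:

```logscale
@id = "event-id-1" OR @id = "event-id-2"
```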
Functions
It is now possible to specify a limit=max argument in the sort() and table() functions. The maximum limit is defined by the StateRowLimit dynamic configuration, which currently defaults to 20,000.
From this release, LogScale increases the limits for the sort(), table(), tail(), and head() functions. These functions can now return up to 50,000 rows (previously 20,000). The maximum row limit is planned to be increased in upcoming releases. You can use the limit=max argument in your queries to always utilize the current settings. Notes:
- Queries are limited to a 1 GB state size. If this limit is reached, functions may return fewer rows.
- The default value for the limit parameter is 200. This limit will be increased for the sort() and tail() functions in upcoming releases.
- The limit=max syntax is currently not supported in Multi-Cluster Search setups. LogScale will support it starting from version 1.189.
- For Self-Hosted environments, the new maximum limit set through the StateRowLimit dynamic configuration is controlled by the feature flag SortNewDatastructure. Removal of this feature flag, making its effects standard behavior, is expected by version 1.189.
Increased the maximum limit for sort(), table(), head(), and tail() from 50,000 to 100,000.
Increased the maximum limit for the sort(), head(), tail(), and table() functions from 100,000 to 200,000. This also increases the value of the QueryResultRowCountLimit dynamic configuration to 200,000.
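A minimal illustrative query using the limit=max argument described above, so the function always uses the currently configured maximum:

```logscale
sort(field=@timestamp, order=desc, limit=max)
```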
Fixed in this release
Security
Fixed an issue where it was not possible to edit an OIDC identity provider if it was configured as the default identity provider for the organization.
OAuth login failed when following certain links to LogScale queries, due to LogScale not being able to decode the OAuth state value.
In rare cases, references would not be cleaned up properly when deleting a role. Any further attempts to remove these references would fail. This issue has now been fixed.
Administration and Management
Fixed a bug in clusters where a file could be deleted while it was being downloaded from bucket storage, if the current host could not write the download confirmation message to global storage.
Inaccuracy issues have been fixed for the ingest-offset-lowest metric.
User Interface
When configuring field aliasing and importing a field alias schema from a YAML file, the Original field name and Alias to fields were being swapped. The example below of a YAML file would cause myOriginalField to become the alias, and myAliasField to become the original field:
```yaml
$schema: https://schemas.humio.com/dataschema/v0.1.0
aliasMapping:
  - aliases:
      myOriginalField: myAliasField
    fieldsToKeep: []
    name: someAliasMappingName
    tags:
      '#someTag': someTagValue
fields: []
name: mySchema
```
Intended setup: myOriginalField → myAliasField
Bug result: myAliasField → myOriginalField
For those who used the import feature:
- Review current alias mappings in your schema
- Check if fields are reversed from your intended configuration
- If reversed, manually swap the fields back to the correct order.
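To check whether an imported mapping was affected, you can compare your intended mapping against its inversion. This small sketch (illustrative only) shows the swap the bug produced:

```python
def swapped(aliases):
    """Return the mapping as the import bug would have read it:
    original field and alias reversed."""
    return {alias: original for original, alias in aliases.items()}

intended = {"myOriginalField": "myAliasField"}
bug_result = swapped(intended)
# bug_result == {"myAliasField": "myOriginalField"}
```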
When trying to get a token via the UI, the display of the token would close before users could copy it. This issue has now been fixed.
The menu item is now correctly disabled when the user lacks deletion permissions.
Automation and Alerts
Aggregate alerts no longer warn about ingest delay when the delay is not relevant for the aggregate alert.
Fixed a bug that could prevent all aggregate and filter alerts from running.
Aggregate alerts have been fixed for an issue where they could fail and restart while starting the alert query.
Previously, a fatal error in handling an Alert or Scheduled search could result in other Alerts or Scheduled searches failing to run. This issue has now been fixed.
Storage
Fixed an issue where LogScale might not correct segment over-replication by removing replicas.
Fixed an issue where accidental over-replication could remain unhandled until the rebalancing job triggers.
Fixed an issue where a node might crash during the digest phase due to incorrect state tracking.
Global offset was not updating during patch updates, which could create duplicate global snapshot names with different contents. This issue has now been fixed so that all global updates will now update the offset and should no longer produce the same snapshot name twice.
Before this fix, the system incorrectly removed all currentHosts from segments during bucket storage upload when:
- NoCurrentsForBucketSegments was enabled
- the S3_STORAGE_PREFERRED_COPY_SOURCE variable was disabled
Now currentHosts are only removed from segments when both settings are enabled.
Digest would fail to start in rare cases until the node was manually rebooted.
Debug log lines could be missing if LogScale crashed during the boot sequence.
If a bucket was previously used as the source for disaster recovery via the S3_RECOVER_FROM_KMS_KEY_ARN configuration, and the cluster configuration was updated to use that bucket again as the S3_STORAGE_BUCKET, the global state of the bucket was not correctly updated, causing LogScale to upload the same files repeatedly into the bucket in an attempt to perform disaster recovery. This issue has been fixed.
Fixed an issue where operations during repository deletion could trigger incorrect New dataspace is not empty log messages.
Dashboards and Widgets
Fixed an issue where the event distribution chart would be hidden by default if a repository was configured with automatic search disabled.
The Bar Chart widget has been fixed, as bars would not always react to hover and click events.
Fixed an issue in the Bar Chart widget where the series would not be found automatically even with the fields present in the query result.
The Table widget has been fixed, as it would display an empty page on a dashboard when applying a parameter or dashboard filter.
Queries
Fixed an issue where if a query was restarted it might, in some cases, be removed completely before it could be polled for results, leading to a 404 error on query poll.
In case of network failures occurring during polling or other operations in Multi-Cluster Search, even transient failures, the message could not be correctly serialized, leading to query failure. This fixes some cases where a multi-cluster query (using, for example, defineTable()) might now work and return results with a warning, where it previously failed.
Simplifications around Query Coordination for cluster queries have been made internally to fix an issue which, in rare cases, could lead to a query being handed over without a coordinator.
Query state issues during query restarts have been addressed to resolve or reduce these behaviors:
Queries returning a 404 error during restart operations
Queries displaying an incorrect stopped status
The User Interface would show Query status: Done even for queries whose completion rate was less than 100%. This issue has now been fixed.
Fixed an issue where in rare cases a static query might terminate early and include incomplete results.
Fixed an issue where the execution time of a static query did not include the result phase of the execution.
Fixed an issue where a query could be incorrectly started from a cached state, which would lead to its failure. This specifically happened for static queries which ended in the past, for example end != "now".
Regexes in queries using the LogScale Regular Expression Engine V2 would give unreadable diagnostics. For example, before the fix the regex /\c/F would produce the diagnostic message: Couldn't compile regex. Problems: List((EscapeSequenceIncomplete(),0,2)). The issue has been fixed so that the same regex now correctly produces the diagnostic message: Escape sequence incomplete. This escape sequence (starting with a '\') is not syntactically valid. It is perhaps incomplete, or not intended to be an escape sequence.
Queries with subqueries have been fixed in cases where they would not correctly report their max and latest state size.
Improved error handling for remote responses in LogScale Multi-Cluster Search by fixing incorrect error reporting that was masking underlying issues.
Events could be missed if a live query was run based on @ingesttimestamp but @timestamp was outside the time window of the query. This would affect all Filter alerts as well as Aggregate alerts running on @ingesttimestamp.
Transferring tables between cluster nodes (either defined using defineTable() or from Lookup Files) could lead to thread starvation and node crashes. This issue has now been fixed.
Fleet Management
Fixed an issue in Fleet Management where older patch versions of Falcon LogScale Collector were unavailable, causing 404 errors when attempting to upgrade/downgrade to specific versions.
Functions
Using fields that were not in the original event in the where clause would fail for the selfJoin() and selfJoinFilter() functions when the prefilter parameter is set to true.
Before this fix, array:eval() and objectArray:eval() might cause an internal error or return incorrect or garbled data, depending on the internal representation of the event they were working on.
Other
An LsCpuJob issue showing up in logs has been fixed: the job incorrectly assumed that the string output from the lscpu shell command could not contain colons.
Some occurrences of duplicate stop words have been removed from the backend. For example, "on on" was corrected to "on" in some error messages.
Improvement
Administration and Management
A new labelled metric was created to track how often the S3AsyncClient has to retry API calls for S3 bucket operations. The metric's full name as reported by LogScale is s3-aws-retry-count/{operationName}, where operationName is the S3 API call that was attempted, such as PutObject or GetObject.
User Interface
Events List and Table widgets now load large query results faster through the new Pagination API implementation.
During group creation, LogScale incorrectly displayed a You have no roles yet message, despite roles always being present after the creation was completed.
This behavior has now been removed from the group creation workflow, as this state can't occur in the system (organizations always have at least one default role). This change improves the system as follows:
- Improved workflow consistency: the roles list now appears correctly in the UI during group creation
- Enhanced internal system stability
Users can still create new roles through the existing roles list view.
LogScale now provides enhanced accessibility for disabled icon buttons. Users can understand why an icon is unavailable through clear feedback from both tooltip and screen reader announcements. This improvement makes the interface more inclusive for keyboard navigation and screen reader users.
The UI menu is now hidden on those views where it is not applicable, instead of showing as disabled.
Automation and Alerts
Added more fields to some of the logs for aggregate alerts in the humio-activity repository.
Scheduled searches can now also run on the @ingesttimestamp. A configurable Max wait time property on scheduled searches that run on @ingesttimestamp is used to catch events that are delayed in the ingestion pipeline, or to wait for query warnings about missing data and errors. The @ingesttimestamp is the default timestamp set on all new scheduled searches.
With this change, the GraphQL mutations createScheduledSearch and updateScheduledSearch have been deprecated for removal in 1.231 and createScheduledSearchV2 and updateScheduledSearchV2 will replace them.
For more information about scheduled searches and their timestamps, see Ingest delay for scheduled searches. For information about the Max wait time property, see Max wait time.
A link is now added to open the query on the Search page when the trigger is in a read-only state on the Triggers overview.
Storage
Jobs and metrics which were specific to S3 Archiving have been renamed to generic archiving to make them more provider-agnostic. For example, S3ArchivingSchema is now ArchivingSchema.
LogScale nodes will now delay moving segments away from gracefully terminated nodes, to avoid moving segments unnecessarily for ordinary reboots. The default delay is 5 minutes. Nodes being removed long-term from the cluster should be evicted first, which will disable this delay. The delay can be adjusted using the GracefulShutdownConsideredAliveSeconds dynamic configuration.
Dashboards and Widgets
When exporting a dashboard as a template file, the queryString field for an interaction and the urlTemplate field for an interaction no longer require minimum lengths and can be empty.
The Link option for formatting columns in the Table widget now allows for opening links in a new tab.
Dashboards now load query results faster due to optimized field statistics calculations.
Dashboards now benefit from enhanced field statistics computation. This optimization ensures better dashboard performance while processing query results.
Queries
The readFile() function now takes an array of table/file names in its file parameter (or its alias parameter table). If multiple file or table names are given, they will be output in order.
If a file/table does not have a column requested in the include parameter, the error message for readFile() now indicates which file(s) do not have the specified column(s).
The Query Coordinator now accurately tracks the frequency of client polls. This improvement prevents unnecessary polling operations in those cases where clients do not poll the query as frequently as allowed, as with alerts for example.
Improved error handling when submitting queries: if an invalid query is submitted, the submission is no longer retried internally. This in particular improves error reporting for Alerts and Scheduled searches.
Queries run on behalf of an organization are now logged to the humio-audit and humio-activity repositories like other queries.
A performance improvement has been implemented for queries combining multiple text searches with different tag filters, through reduced data scan volume. Example query:
```logscale
#event=ConnectIP4 OR (#event=ReceiveAcceptIP4 AND RemoteAddressIP4=12.34.56.78)
```