pyspark.pandas.Index.is_monotonic_decreasing#

property Index.is_monotonic_decreasing#

Return boolean if values in the object are monotonically decreasing.

Note

The current implementation of is_monotonic_decreasing requires shuffling and aggregating multiple times to check the order locally and globally, which is potentially expensive. In the case of a multi-index, all data is transferred to a single node, which can easily cause out-of-memory errors.

Note

Disable the Spark config spark.sql.optimizer.nestedSchemaPruning.enabled for multi-index if you’re using pandas-on-Spark < 1.7.0 with PySpark 3.1.1.
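For example, the config can be turned off on the active session before evaluating the property (a minimal sketch; it assumes an already-running SparkSession and uses only the standard PySpark configuration API):

>>> from pyspark.sql import SparkSession
>>> spark = SparkSession.builder.getOrCreate()
>>> spark.conf.set("spark.sql.optimizer.nestedSchemaPruning.enabled", "false")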

Returns
is_monotonic : bool

Examples

>>> import pyspark.pandas as ps
>>> ser = ps.Series(['4/1/2018', '3/1/2018', '1/1/2018'])
>>> ser.is_monotonic_decreasing
True
>>> df = ps.DataFrame({'dates': [None, '3/1/2018', '2/1/2018', '1/1/2018']})
>>> df.dates.is_monotonic_decreasing
False
>>> df.index.is_monotonic_decreasing
False
>>> ser = ps.Series([1])
>>> ser.is_monotonic_decreasing
True
>>> ser = ps.Series([])
>>> ser.is_monotonic_decreasing
True
>>> ser.rename("a").to_frame().set_index("a").index.is_monotonic_decreasing
True
>>> ser = ps.Series([5, 4, 3, 2, 1], index=[1, 2, 3, 4, 5])
>>> ser.is_monotonic_decreasing
True
>>> ser.index.is_monotonic_decreasing
False

Support for MultiIndex

>>> midx = ps.MultiIndex.from_tuples(
... [('x', 'a'), ('x', 'b'), ('y', 'c'), ('y', 'd'), ('z', 'e')])
>>> midx  
MultiIndex([('x', 'a'),
            ('x', 'b'),
            ('y', 'c'),
            ('y', 'd'),
            ('z', 'e')],
           )
>>> midx.is_monotonic_decreasing
False
>>> midx = ps.MultiIndex.from_tuples(
... [('z', 'e'), ('z', 'd'), ('y', 'c'), ('y', 'b'), ('x', 'a')])
>>> midx  
MultiIndex([('z', 'e'),
            ('z', 'd'),
            ('y', 'c'),
            ('y', 'b'),
            ('x', 'a')],
           )
>>> midx.is_monotonic_decreasing
True