package core:sys/android
Warning: This was generated for -target:windows_amd64 and might not represent every target this package supports.
Index
Types (214)
- AAdditionalInfoEvent
- AAsset
- AAssetDir
- AAssetManager
- ABitmapResult
- AChoreographer
- AChoreographerFrameCallbackData
- AChoreographer_frameCallback
- AChoreographer_frameCallback64
- AChoreographer_refreshRateCallback
- AChoreographer_vsyncCallback
- AColor_xy
- AConfiguration
- ADataSpace
- ADynamicSensorEvent
- AFont
- AFontMatcher
- AHardwareBuffer
- AHardwareBuffer_Desc
- AHardwareBuffer_Format
- AHardwareBuffer_Plane
- AHardwareBuffer_Planes
- AHardwareBuffer_UsageFlags
- AHdrMetadataType
- AHdrMetadata_cta861_3
- AHdrMetadata_smpte2086
- AHeadTrackerEvent
- AHeadingEvent
- AHeartRateEvent
- AImageDecoder
- AImageDecoderBlendOp
- AImageDecoderFrameDisposalOp
- AImageDecoderFrameInfo
- AImageDecoderHeaderInfo
- AImageDecoderRepeatCount
- AImageDecoderResult
- AInputEvent
- AInputQueue
- ALimitedAxesImuEvent
- ALimitedAxesImuUncalibratedEvent
- ALooper
- ALooperFdFlags
- ALooperFdFlagsBits
- ALooperPollResult
- ALooper_callbackFunc
- AMetaDataEvent
- AMotionClassification
- ANativeActivity
- ANativeActivityCallbacks
- ANativeActivity_createFunc
- ANativeWindow
- ANativeWindowTransform
- ANativeWindow_Buffer
- ANativeWindow_ChangeFrameRateStrategy
- ANativeWindow_FrameRateCompatibility
- ANativeWindow_LegacyFormat
- ANeuralNetworksBurst
- ANeuralNetworksCompilation
- ANeuralNetworksDevice
- ANeuralNetworksEvent
- ANeuralNetworksExecution
- ANeuralNetworksMemory
- ANeuralNetworksMemoryDesc
- ANeuralNetworksModel
- ANeuralNetworksOperandType
- ANeuralNetworksOperationType
- ANeuralNetworksSymmPerChannelQuantParams
- AObbInfo
- APerformanceHintManager
- APerformanceHintSession
- ARect
- ASensor
- ASensorEvent
- ASensorEventQueue
- ASensorList
- ASensorManager
- ASensorRef
- ASensorVector
- AStorageManager
- AStorageManager_obbCallbackFunc
- ASurfaceControl
- ASurfaceTexture
- ASurfaceTransaction
- ASurfaceTransactionStats
- ASurfaceTransaction_OnBufferRelease
- ASurfaceTransaction_OnCommit
- ASurfaceTransaction_OnComplete
- ASystemFontIterator
- AThermalHeadroomThreshold
- AThermalManager
- AThermalStatus
- AThermal_StatusCallback
- AUncalibratedEvent
- AVsyncId
- AWorkDuration
- AndroidBitmapCompressFormat
- AndroidBitmapFlags
- AndroidBitmapFlagsAlpha
- AndroidBitmapFormat
- AndroidBitmap_CompressWriteFunc
- AppCmd
- AssetFileError
- AssetOpenMode
- BitmapInfo
- DLextFlags
- DLextFlagsBits
- DeviceTypeCode
- DurationCode
- FamilyVariant
- FeatureLevelCode
- FontWeight
- FuseCode
- HideSoftInputFlags
- InputEventType
- InputSource
- InputSourceClass
- InputSourceClassBits
- InputSourceDevice
- InputSourceDeviceBits
- JNIEnv
- JNIInvokeInterface
- JNINativeInterface
- JNINativeMethod
- JavaVM
- KeyBoardType
- KeyEventAction
- KeyEventFlags
- KeyEventFlagsBits
- KeyState
- Keycode
- LogId
- LogMessage
- LogPriority
- MetaKeyState
- MetaKeyStateBits
- MotionEventAction
- MotionEventActionEnum
- MotionEventAxis
- MotionEventButton
- MotionEventEdgeFlags
- MotionEventEdgeFlagsBits
- MotionEventFlags
- MotionEventFlagsBits
- MotionRange
- NNResultCode
- OBBState
- ObbFlags
- OperandCode
- OperationCode
- PaddingCode
- PermissionManagerResult
- PermissionManagerStatus
- PreferenceCode
- PriorityCode
- ResNsendFlags
- ResNsendFlagsBits
- Seek_Whence
- SensorAdditionalInfo
- SensorDirectChannelType
- SensorDirectReportRate
- SensorReportingMode
- SensorStatus
- SensorType
- ShowSoftInputFlags
- SurfaceTransactionTransparency
- SurfaceTransactionVisibility
- ToolType
- WindowFlags
- WindowFlagsBits
- addrinfo
- android_app
- android_dlextinfo
- android_fdsan_error_level
- android_fdsan_owner_type
- android_namespace_t
- android_poll_source
- jarray
- jboolean
- jbooleanArray
- jbyte
- jbyteArray
- jchar
- jcharArray
- jclass
- jdouble
- jdoubleArray
- jfieldID
- jfloat
- jfloatArray
- jint
- jintArray
- jlong
- jlongArray
- jmethodID
- jobject
- jobjectArray
- jobjectRefType
- jshort
- jshortArray
- jsize
- jstring
- jthrowable
- jvalue
- jweak
- net_handle_t
- off64_t
- off_t
- pid_t
- sa_family_t
- sockaddr
- socklen_t
- sync_fence_info
- sync_file_info
- uid_t
Constants (117)
- ALOOPER_PREPARE_ALLOW_NON_CALLBACKS
- ANEURALNETWORKS_BYTE_SIZE_OF_CACHE_TOKEN
- ANEURALNETWORKS_MAX_SIZE_OF_IMMEDIATELY_COPIED_VALUES
- ANY_INPUT_SOURCE
- ASENSOR_DELAY_INVALID
- ASENSOR_FIFO_COUNT_INVALID
- ASENSOR_INVALID
- ASENSOR_MAGNETIC_FIELD_EARTH_MAX
- ASENSOR_MAGNETIC_FIELD_EARTH_MIN
- ASENSOR_RESOLUTION_INVALID
- ASENSOR_STANDARD_GRAVITY
- AllDLextFlags
- COLOR_MODE
- DENSITY
- DENSITY_ANY
- DENSITY_DEFAULT
- DENSITY_HIGH
- DENSITY_LOW
- DENSITY_MEDIUM
- DENSITY_NONE
- DENSITY_TV
- DENSITY_XHIGH
- DENSITY_XXHIGH
- DENSITY_XXXHIGH
- GRAMMATICAL_GENDER
- GRAMMATICAL_GENDER_ANY
- GRAMMATICAL_GENDER_FEMININE
- GRAMMATICAL_GENDER_MASCULINE
- GRAMMATICAL_GENDER_NEUTER
- HDR_ANY
- HDR_NO
- HDR_YES
- JNI_ABORT
- JNI_COMMIT
- JNI_EDETACHED
- JNI_EEXIST
- JNI_EINVAL
- JNI_ENOMEM
- JNI_ERR
- JNI_EVERSION
- JNI_OK
- JNI_VERSION_1_1
- JNI_VERSION_1_2
- JNI_VERSION_1_4
- JNI_VERSION_1_6
- KEYBOARD
- KEYBOARD_12KEY
- KEYBOARD_ANY
- KEYBOARD_HIDDEN
- KEYBOARD_NOKEYS
- KEYBOARD_QWERTY
- KEYSHIDDEN_ANY
- KEYSHIDDEN_NO
- KEYSHIDDEN_SOFT
- KEYSHIDDEN_YES
- LAYOUTDIR
- LAYOUTDIR_ANY
- LAYOUTDIR_LTR
- LAYOUTDIR_RTL
- LOCALE
- MCC
- MNC
- MNC_ZERO
- NAVHIDDEN_ANY
- NAVHIDDEN_NO
- NAVHIDDEN_YES
- NAVIGATION
- NAVIGATION_ANY
- NAVIGATION_DPAD
- NAVIGATION_NONAV
- NAVIGATION_TRACKBALL
- NAVIGATION_WHEEL
- NETWORK_UNSPECIFIED
- ORIENTATION
- ORIENTATION_ANY
- ORIENTATION_LAND
- ORIENTATION_PORT
- ORIENTATION_SQUARE
- SCREENLONG_ANY
- SCREENLONG_NO
- SCREENLONG_YES
- SCREENROUND_ANY
- SCREENROUND_NO
- SCREENROUND_YES
- SCREENSIZE_ANY
- SCREENSIZE_LARGE
- SCREENSIZE_NORMAL
- SCREENSIZE_SMALL
- SCREENSIZE_XLARGE
- SCREEN_HEIGHT_DP_ANY
- SCREEN_LAYOUT
- SCREEN_ROUND
- SCREEN_SIZE
- SCREEN_WIDTH_DP_ANY
- SMALLEST_SCREEN_SIZE
- SMALLEST_SCREEN_WIDTH_DP_ANY
- TOUCHSCREEN
- TOUCHSCREEN_ANY
- TOUCHSCREEN_FINGER
- TOUCHSCREEN_NOTOUCH
- TOUCHSCREEN_STYLUS
- UI_MODE
- UI_MODE_NIGHT_ANY
- UI_MODE_NIGHT_NO
- UI_MODE_NIGHT_YES
- UI_MODE_TYPE_ANY
- UI_MODE_TYPE_APPLIANCE
- UI_MODE_TYPE_CAR
- UI_MODE_TYPE_DESK
- UI_MODE_TYPE_NORMAL
- UI_MODE_TYPE_TELEVISION
- UI_MODE_TYPE_VR_HEADSET
- UI_MODE_TYPE_WATCH
- VERSION
- WIDE_COLOR_GAMUT_ANY
- WIDE_COLOR_GAMUT_NO
- WIDE_COLOR_GAMUT_YES
Variables (0)
This section is empty.
Procedures (475)
- AAssetDir_close
- AAssetDir_getNextFileName
- AAssetDir_rewind
- AAssetManager_fromJava
- AAssetManager_open
- AAssetManager_openDir
- AAsset_close
- AAsset_getBuffer
- AAsset_getLength
- AAsset_getLength64
- AAsset_getRemainingLength
- AAsset_getRemainingLength64
- AAsset_isAllocated
- AAsset_openFileDescriptor
- AAsset_openFileDescriptor64
- AAsset_read
- AAsset_seek
- AAsset_seek64
- AChoreographerFrameCallbackData_getFrameTimeNanos
- AChoreographerFrameCallbackData_getFrameTimelineDeadlineNanos
- AChoreographerFrameCallbackData_getFrameTimelineExpectedPresentationTimeNanos
- AChoreographerFrameCallbackData_getFrameTimelineVsyncId
- AChoreographerFrameCallbackData_getFrameTimelinesLength
- AChoreographerFrameCallbackData_getPreferredFrameTimelineIndex
- AChoreographer_getInstance
- AChoreographer_postFrameCallback
- AChoreographer_postFrameCallback64
- AChoreographer_postFrameCallbackDelayed
- AChoreographer_postFrameCallbackDelayed64
- AChoreographer_postVsyncCallback
- AChoreographer_registerRefreshRateCallback
- AChoreographer_unregisterRefreshRateCallback
- AConfiguration_copy
- AConfiguration_delete
- AConfiguration_diff
- AConfiguration_fromAssetManager
- AConfiguration_getCountry
- AConfiguration_getDensity
- AConfiguration_getGrammaticalGender
- AConfiguration_getKeyboard
- AConfiguration_getKeysHidden
- AConfiguration_getLanguage
- AConfiguration_getLayoutDirection
- AConfiguration_getMcc
- AConfiguration_getMnc
- AConfiguration_getNavHidden
- AConfiguration_getNavigation
- AConfiguration_getOrientation
- AConfiguration_getScreenHeightDp
- AConfiguration_getScreenLong
- AConfiguration_getScreenRound
- AConfiguration_getScreenSize
- AConfiguration_getScreenWidthDp
- AConfiguration_getSdkVersion
- AConfiguration_getSmallestScreenWidthDp
- AConfiguration_getTouchscreen
- AConfiguration_getUiModeNight
- AConfiguration_getUiModeType
- AConfiguration_isBetterThan
- AConfiguration_match
- AConfiguration_new
- AConfiguration_setCountry
- AConfiguration_setDensity
- AConfiguration_setGrammaticalGender
- AConfiguration_setKeyboard
- AConfiguration_setKeysHidden
- AConfiguration_setLanguage
- AConfiguration_setLayoutDirection
- AConfiguration_setMcc
- AConfiguration_setMnc
- AConfiguration_setNavHidden
- AConfiguration_setNavigation
- AConfiguration_setOrientation
- AConfiguration_setScreenHeightDp
- AConfiguration_setScreenLong
- AConfiguration_setScreenRound
- AConfiguration_setScreenSize
- AConfiguration_setScreenWidthDp
- AConfiguration_setSdkVersion
- AConfiguration_setSmallestScreenWidthDp
- AConfiguration_setTouchscreen
- AConfiguration_setUiModeNight
- AConfiguration_setUiModeType
- AFileDescriptor_create
- AFileDescriptor_getFd
- AFileDescriptor_setFd
- AFontMatcher_create
- AFontMatcher_destroy
- AFontMatcher_match
- AFontMatcher_setFamilyVariant
- AFontMatcher_setLocales
- AFontMatcher_setStyle
- AFont_close
- AFont_getAxisCount
- AFont_getAxisTag
- AFont_getAxisValue
- AFont_getCollectionIndex
- AFont_getFontFilePath
- AFont_getLocale
- AFont_getWeight
- AFont_isItalic
- AHardwareBuffer_acquire
- AHardwareBuffer_allocate
- AHardwareBuffer_describe
- AHardwareBuffer_fromHardwareBuffer
- AHardwareBuffer_getId
- AHardwareBuffer_isSupported
- AHardwareBuffer_lock
- AHardwareBuffer_lockAndGetInfo
- AHardwareBuffer_lockPlanes
- AHardwareBuffer_recvHandleFromUnixSocket
- AHardwareBuffer_release
- AHardwareBuffer_sendHandleToUnixSocket
- AHardwareBuffer_toHardwareBuffer
- AHardwareBuffer_unlock
- AImageDecoderFrameInfo_create
- AImageDecoderFrameInfo_delete
- AImageDecoderFrameInfo_getBlendOp
- AImageDecoderFrameInfo_getDisposeOp
- AImageDecoderFrameInfo_getDuration
- AImageDecoderFrameInfo_getFrameRect
- AImageDecoderFrameInfo_hasAlphaWithinBounds
- AImageDecoderHeaderInfo_getAlphaFlags
- AImageDecoderHeaderInfo_getAndroidBitmapFormat
- AImageDecoderHeaderInfo_getDataSpace
- AImageDecoderHeaderInfo_getHeight
- AImageDecoderHeaderInfo_getMimeType
- AImageDecoderHeaderInfo_getWidth
- AImageDecoder_advanceFrame
- AImageDecoder_computeSampledSize
- AImageDecoder_createFromAAsset
- AImageDecoder_createFromBuffer
- AImageDecoder_createFromFd
- AImageDecoder_decodeImage
- AImageDecoder_delete
- AImageDecoder_getFrameInfo
- AImageDecoder_getHeaderInfo
- AImageDecoder_getMinimumStride
- AImageDecoder_getRepeatCount
- AImageDecoder_isAnimated
- AImageDecoder_resultToString
- AImageDecoder_rewind
- AImageDecoder_setAndroidBitmapFormat
- AImageDecoder_setCrop
- AImageDecoder_setDataSpace
- AImageDecoder_setInternallyHandleDisposePrevious
- AImageDecoder_setTargetSize
- AImageDecoder_setUnpremultipliedRequired
- AInputEvent_getDeviceId
- AInputEvent_getSource
- AInputEvent_getType
- AInputEvent_release
- AInputQueue_attachLooper
- AInputQueue_detachLooper
- AInputQueue_finishEvent
- AInputQueue_fromJava
- AInputQueue_getEvent
- AInputQueue_hasEvents
- AInputQueue_preDispatchEvent
- AKeyEvent_fromJava
- AKeyEvent_getAction
- AKeyEvent_getDownTime
- AKeyEvent_getEventTime
- AKeyEvent_getFlags
- AKeyEvent_getKeyCode
- AKeyEvent_getMetaState
- AKeyEvent_getRepeatCount
- AKeyEvent_getScanCode
- ALooper_acquire
- ALooper_addFd
- ALooper_forThread
- ALooper_pollAll
- ALooper_pollOnce
- ALooper_prepare
- ALooper_release
- ALooper_removeFd
- ALooper_wake
- AMotionEvent_fromJava
- AMotionEvent_getAction
- AMotionEvent_getActionButton
- AMotionEvent_getAxisValue
- AMotionEvent_getButtonState
- AMotionEvent_getClassification
- AMotionEvent_getDownTime
- AMotionEvent_getEdgeFlags
- AMotionEvent_getEventTime
- AMotionEvent_getFlags
- AMotionEvent_getHistoricalAxisValue
- AMotionEvent_getHistoricalEventTime
- AMotionEvent_getHistoricalOrientation
- AMotionEvent_getHistoricalPressure
- AMotionEvent_getHistoricalRawX
- AMotionEvent_getHistoricalRawY
- AMotionEvent_getHistoricalSize
- AMotionEvent_getHistoricalToolMajor
- AMotionEvent_getHistoricalToolMinor
- AMotionEvent_getHistoricalTouchMajor
- AMotionEvent_getHistoricalTouchMinor
- AMotionEvent_getHistoricalX
- AMotionEvent_getHistoricalY
- AMotionEvent_getHistorySize
- AMotionEvent_getMetaState
- AMotionEvent_getOrientation
- AMotionEvent_getPointerCount
- AMotionEvent_getPointerId
- AMotionEvent_getPressure
- AMotionEvent_getRawX
- AMotionEvent_getRawY
- AMotionEvent_getSize
- AMotionEvent_getToolMajor
- AMotionEvent_getToolMinor
- AMotionEvent_getToolType
- AMotionEvent_getTouchMajor
- AMotionEvent_getTouchMinor
- AMotionEvent_getX
- AMotionEvent_getXOffset
- AMotionEvent_getXPrecision
- AMotionEvent_getY
- AMotionEvent_getYOffset
- AMotionEvent_getYPrecision
- ANativeActivity_finish
- ANativeActivity_hideSoftInput
- ANativeActivity_setWindowFlags
- ANativeActivity_setWindowFormat
- ANativeActivity_showSoftInput
- ANativeWindow_acquire
- ANativeWindow_clearFrameRate
- ANativeWindow_fromSurface
- ANativeWindow_getBuffersDataSpace
- ANativeWindow_getBuffersDefaultDataSpace
- ANativeWindow_getFormat
- ANativeWindow_getHeight
- ANativeWindow_getWidth
- ANativeWindow_lock
- ANativeWindow_release
- ANativeWindow_setBuffersDataSpace
- ANativeWindow_setBuffersGeometry
- ANativeWindow_setBuffersTransform
- ANativeWindow_setFrameRate
- ANativeWindow_setFrameRateWithChangeStrategy
- ANativeWindow_toSurface
- ANativeWindow_tryAllocateBuffers
- ANativeWindow_unlockAndPost
- ANeuralNetworksBurst_create
- ANeuralNetworksBurst_free
- ANeuralNetworksCompilation_create
- ANeuralNetworksCompilation_createForDevices
- ANeuralNetworksCompilation_finish
- ANeuralNetworksCompilation_free
- ANeuralNetworksCompilation_getPreferredMemoryAlignmentForInput
- ANeuralNetworksCompilation_getPreferredMemoryAlignmentForOutput
- ANeuralNetworksCompilation_getPreferredMemoryPaddingForInput
- ANeuralNetworksCompilation_getPreferredMemoryPaddingForOutput
- ANeuralNetworksCompilation_setCaching
- ANeuralNetworksCompilation_setPreference
- ANeuralNetworksCompilation_setPriority
- ANeuralNetworksCompilation_setTimeout
- ANeuralNetworksDevice_getFeatureLevel
- ANeuralNetworksDevice_getName
- ANeuralNetworksDevice_getType
- ANeuralNetworksDevice_getVersion
- ANeuralNetworksDevice_wait
- ANeuralNetworksEvent_createFromSyncFenceFd
- ANeuralNetworksEvent_free
- ANeuralNetworksEvent_getSyncFenceFd
- ANeuralNetworksEvent_wait
- ANeuralNetworksExecution_burstCompute
- ANeuralNetworksExecution_compute
- ANeuralNetworksExecution_create
- ANeuralNetworksExecution_enableInputAndOutputPadding
- ANeuralNetworksExecution_free
- ANeuralNetworksExecution_getDuration
- ANeuralNetworksExecution_getOutputOperandDimensions
- ANeuralNetworksExecution_getOutputOperandRank
- ANeuralNetworksExecution_setInput
- ANeuralNetworksExecution_setInputFromMemory
- ANeuralNetworksExecution_setLoopTimeout
- ANeuralNetworksExecution_setMeasureTiming
- ANeuralNetworksExecution_setOutput
- ANeuralNetworksExecution_setOutputFromMemory
- ANeuralNetworksExecution_setReusable
- ANeuralNetworksExecution_setTimeout
- ANeuralNetworksExecution_startCompute
- ANeuralNetworksExecution_startComputeWithDependencies
- ANeuralNetworksMemoryDesc_addInputRole
- ANeuralNetworksMemoryDesc_addOutputRole
- ANeuralNetworksMemoryDesc_create
- ANeuralNetworksMemoryDesc_finish
- ANeuralNetworksMemoryDesc_free
- ANeuralNetworksMemoryDesc_setDimensions
- ANeuralNetworksMemory_copy
- ANeuralNetworksMemory_createFromAHardwareBuffer
- ANeuralNetworksMemory_createFromDesc
- ANeuralNetworksMemory_createFromFd
- ANeuralNetworksMemory_free
- ANeuralNetworksModel_addOperand
- ANeuralNetworksModel_addOperation
- ANeuralNetworksModel_create
- ANeuralNetworksModel_finish
- ANeuralNetworksModel_free
- ANeuralNetworksModel_getSupportedOperationsForDevices
- ANeuralNetworksModel_identifyInputsAndOutputs
- ANeuralNetworksModel_relaxComputationFloat32toFloat16
- ANeuralNetworksModel_setOperandSymmPerChannelQuantParams
- ANeuralNetworksModel_setOperandValue
- ANeuralNetworksModel_setOperandValueFromMemory
- ANeuralNetworksModel_setOperandValueFromModel
- ANeuralNetworks_getDefaultLoopTimeout
- ANeuralNetworks_getDevice
- ANeuralNetworks_getDeviceCount
- ANeuralNetworks_getMaximumLoopTimeout
- ANeuralNetworks_getRuntimeFeatureLevel
- AObbInfo_delete
- AObbInfo_getFlags
- AObbInfo_getPackageName
- AObbInfo_getVersion
- AObbScanner_getObbInfo
- APerformanceHint_closeSession
- APerformanceHint_createSession
- APerformanceHint_getManager
- APerformanceHint_getPreferredUpdateRateNanos
- APerformanceHint_reportActualWorkDuration
- APerformanceHint_reportActualWorkDuration2
- APerformanceHint_setPreferPowerEfficiency
- APerformanceHint_setThreads
- APerformanceHint_updateTargetWorkDuration
- APermissionManager_checkPermission
- ASensorEventQueue_disableSensor
- ASensorEventQueue_enableSensor
- ASensorEventQueue_getEvents
- ASensorEventQueue_hasEvents
- ASensorEventQueue_registerSensor
- ASensorEventQueue_requestAdditionalInfoEvents
- ASensorEventQueue_setEventRate
- ASensorManager_configureDirectReport
- ASensorManager_createEventQueue
- ASensorManager_createHardwareBufferDirectChannel
- ASensorManager_createSharedMemoryDirectChannel
- ASensorManager_destroyDirectChannel
- ASensorManager_destroyEventQueue
- ASensorManager_getDefaultSensor
- ASensorManager_getDefaultSensorEx
- ASensorManager_getDynamicSensorList
- ASensorManager_getInstance
- ASensorManager_getInstanceForPackage
- ASensorManager_getSensorList
- ASensor_getFifoMaxEventCount
- ASensor_getFifoReservedEventCount
- ASensor_getHandle
- ASensor_getHighestDirectReportRateLevel
- ASensor_getMinDelay
- ASensor_getName
- ASensor_getReportingMode
- ASensor_getResolution
- ASensor_getStringType
- ASensor_getType
- ASensor_getVendor
- ASensor_isDirectChannelTypeSupported
- ASensor_isWakeUpSensor
- ASharedMemory_create
- ASharedMemory_dupFromJava
- ASharedMemory_getSize
- ASharedMemory_setProt
- AStorageManager_delete
- AStorageManager_getMountedObbPath
- AStorageManager_isObbMounted
- AStorageManager_mountObb
- AStorageManager_new
- AStorageManager_unmountObb
- ASurfaceControl_acquire
- ASurfaceControl_create
- ASurfaceControl_createFromWindow
- ASurfaceControl_release
- ASurfaceTexture_acquireANativeWindow
- ASurfaceTexture_attachToGLContext
- ASurfaceTexture_detachFromGLContext
- ASurfaceTexture_fromSurfaceTexture
- ASurfaceTexture_getTimestamp
- ASurfaceTexture_getTransformMatrix
- ASurfaceTexture_release
- ASurfaceTexture_updateTexImage
- ASurfaceTransactionStats_getASurfaceControls
- ASurfaceTransactionStats_getAcquireTime
- ASurfaceTransactionStats_getLatchTime
- ASurfaceTransactionStats_getPresentFenceFd
- ASurfaceTransactionStats_getPreviousReleaseFenceFd
- ASurfaceTransactionStats_releaseASurfaceControls
- ASurfaceTransaction_apply
- ASurfaceTransaction_clearFrameRate
- ASurfaceTransaction_create
- ASurfaceTransaction_delete
- ASurfaceTransaction_reparent
- ASurfaceTransaction_setBuffer
- ASurfaceTransaction_setBufferAlpha
- ASurfaceTransaction_setBufferDataSpace
- ASurfaceTransaction_setBufferTransform
- ASurfaceTransaction_setBufferTransparency
- ASurfaceTransaction_setBufferWithRelease
- ASurfaceTransaction_setColor
- ASurfaceTransaction_setCrop
- ASurfaceTransaction_setDamageRegion
- ASurfaceTransaction_setDesiredHdrHeadroom
- ASurfaceTransaction_setDesiredPresentTime
- ASurfaceTransaction_setEnableBackPressure
- ASurfaceTransaction_setExtendedRangeBrightness
- ASurfaceTransaction_setFrameRate
- ASurfaceTransaction_setFrameRateWithChangeStrategy
- ASurfaceTransaction_setFrameTimeline
- ASurfaceTransaction_setGeometry
- ASurfaceTransaction_setHdrMetadata_cta861_3
- ASurfaceTransaction_setHdrMetadata_smpte2086
- ASurfaceTransaction_setOnCommit
- ASurfaceTransaction_setOnComplete
- ASurfaceTransaction_setPosition
- ASurfaceTransaction_setScale
- ASurfaceTransaction_setVisibility
- ASurfaceTransaction_setZOrder
- ASystemFontIterator_close
- ASystemFontIterator_next
- ASystemFontIterator_open
- AThermal_acquireManager
- AThermal_getCurrentThermalStatus
- AThermal_getThermalHeadroom
- AThermal_getThermalHeadroomThresholds
- AThermal_registerThermalStatusListener
- AThermal_releaseManager
- AThermal_unregisterThermalStatusListener
- ATrace_beginAsyncSection
- ATrace_beginSection
- ATrace_endAsyncSection
- ATrace_endSection
- ATrace_isEnabled
- ATrace_setCounter
- AWorkDuration_create
- AWorkDuration_release
- AWorkDuration_setActualCpuDurationNanos
- AWorkDuration_setActualGpuDurationNanos
- AWorkDuration_setActualTotalDurationNanos
- AWorkDuration_setWorkPeriodStartTimestampNanos
- AndroidBitmap_compress
- AndroidBitmap_getDataSpace
- AndroidBitmap_getHardwareBuffer
- AndroidBitmap_getInfo
- AndroidBitmap_lockPixels
- AndroidBitmap_unlockPixels
- android_dlopen_ext
- android_fdsan_close_with_tag
- android_fdsan_create_owner_tag
- android_fdsan_exchange_owner_tag
- android_fdsan_get_error_level
- android_fdsan_get_owner_tag
- android_fdsan_get_tag_type
- android_fdsan_get_tag_value
- android_fdsan_set_error_level
- android_fdsan_set_error_level_from_property
- android_getaddrinfofornetwork
- android_getprocdns
- android_getprocnetwork
- android_res_cancel
- android_res_nquery
- android_res_nresult
- android_res_nsend
- android_set_abort_message
- android_setprocdns
- android_setprocnetwork
- android_setsocknetwork
- android_tag_socket
- android_tag_socket_with_uid
- android_untag_socket
- app_dummy
- asset_read_file
- get_android_app
- sync_file_info_free
- sync_get_fence_info
- sync_merge
Procedure Groups (0)
This section is empty.
Types
AAdditionalInfoEvent ¶
AAdditionalInfoEvent :: struct {
	// Event type, such as ASENSOR_ADDITIONAL_INFO_BEGIN, ASENSOR_ADDITIONAL_INFO_END and others.
	// Refer to ASENSOR_TYPE_ADDITIONAL_INFO for the expected reporting behavior.
	type:   SensorAdditionalInfo,
	serial: i32,
	using _: struct #raw_union {
		data_int32: [14]i32,
		data_float: [14]f32,
	},
}
AAsset ¶
AAsset :: struct {}
AAsset provides access to a read-only asset.

AAsset objects are NOT thread-safe, and should not be shared across threads.
Related Procedures With Parameters
Related Procedures With Returns
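Since an AAsset is read-only and single-threaded, the typical pattern is open, read, close on one thread. Below is a minimal sketch, not from the source: it assumes the bindings mirror the NDK C signatures and that AssetOpenMode has a member named BUFFER; read_asset is an illustrative name.

import android "core:sys/android"

read_asset :: proc(mgr: ^android.AAssetManager, path: cstring) -> []byte {
	asset := android.AAssetManager_open(mgr, path, .BUFFER)
	if asset == nil {
		return nil
	}
	defer android.AAsset_close(asset)

	length := android.AAsset_getLength(asset)
	buffer := android.AAsset_getBuffer(asset) // valid only until AAsset_close
	if buffer == nil || length <= 0 {
		return nil
	}

	// Copy out so the data outlives the asset.
	data := make([]byte, int(length))
	copy(data, ([^]byte)(buffer)[:int(length)])
	return data
}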
AAssetDir ¶
AAssetDir :: struct {}
AAssetDir provides access to a chunk of the asset hierarchy as if it were a single directory. The contents are populated by the AAssetManager.

The list of files will be sorted in ascending order by ASCII value.
Related Procedures With Parameters
Related Procedures With Returns
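A minimal directory-listing sketch, not from the source, assuming the bindings mirror the NDK C signatures; list_assets is an illustrative name.

import android "core:sys/android"
import "core:fmt"

list_assets :: proc(mgr: ^android.AAssetManager, dir_path: cstring) {
	dir := android.AAssetManager_openDir(mgr, dir_path)
	if dir == nil {
		return
	}
	defer android.AAssetDir_close(dir)

	// File names come back sorted in ascending order by ASCII value.
	for name := android.AAssetDir_getNextFileName(dir); name != nil; name = android.AAssetDir_getNextFileName(dir) {
		fmt.println("asset:", name)
	}
}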
AAssetManager ¶
AAssetManager :: struct {}
AAssetManager provides access to an application's raw assets by creating AAsset objects.

AAssetManager is a wrapper around the low-level native implementation of the Java AssetManager; a pointer can be obtained using AAssetManager_fromJava().

The asset hierarchy may be examined like a filesystem, using AAssetDir objects to peruse a single directory.

A native AAssetManager pointer may be shared across multiple threads.
Related Procedures With Parameters
Related Procedures With Returns
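A minimal sketch, not from the source, of bridging from a Java-side AssetManager to the native one, assuming AAssetManager_fromJava mirrors the NDK signature; env and java_asset_manager are illustrative values that a JNI entry point would supply.

import android "core:sys/android"

native_manager :: proc(env: ^android.JNIEnv, java_asset_manager: android.jobject) -> ^android.AAssetManager {
	// The returned pointer may be shared across multiple threads.
	return android.AAssetManager_fromJava(env, java_asset_manager)
}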
ABitmapResult ¶
ABitmapResult :: enum i32 {
	SUCCESS           = 0,
	BAD_PARAMETER     = -1,
	JNI_EXCEPTION     = -2,
	ALLOCATION_FAILED = -3,
}
AndroidBitmap functions result code.
Related Procedures With Returns
AChoreographer ¶
AChoreographer :: struct {}
Opaque type that provides access to an AChoreographer object.

A pointer can be obtained using AChoreographer_getInstance().
Related Procedures With Parameters
Related Procedures With Returns
AChoreographerFrameCallbackData ¶
AChoreographerFrameCallbackData :: struct {}
Opaque type that provides access to an AChoreographerFrameCallbackData object, which contains various methods to extract frame information.
Related Procedures With Parameters
- AChoreographerFrameCallbackData_getFrameTimeNanos
- AChoreographerFrameCallbackData_getFrameTimelineDeadlineNanos
- AChoreographerFrameCallbackData_getFrameTimelineExpectedPresentationTimeNanos
- AChoreographerFrameCallbackData_getFrameTimelineVsyncId
- AChoreographerFrameCallbackData_getFrameTimelinesLength
- AChoreographerFrameCallbackData_getPreferredFrameTimelineIndex
AChoreographer_frameCallback ¶
Prototype of the function that is called when a new frame is being rendered. It's passed the time that the frame is being rendered as nanoseconds in the CLOCK_MONOTONIC time base, as well as the data pointer provided by the application that registered a callback. All callbacks that run as part of rendering a frame will observe the same frame time, so it should be used whenever events need to be synchronized (e.g. animations).
Related Procedures With Parameters
AChoreographer_frameCallback64 ¶
Prototype of the function that is called when a new frame is being rendered. It's passed the time that the frame is being rendered as nanoseconds in the CLOCK_MONOTONIC time base, as well as the data pointer provided by the application that registered a callback. All callbacks that run as part of rendering a frame will observe the same frame time, so it should be used whenever events need to be synchronized (e.g. animations).
Related Procedures With Parameters
AChoreographer_refreshRateCallback ¶
Prototype of the function that is called when the display refresh rate changes. It's passed the new vsync period in nanoseconds, as well as the data pointer provided by the application that registered a callback.
Related Procedures With Parameters
AChoreographer_vsyncCallback ¶
AChoreographer_vsyncCallback :: proc "c" (callbackData: ^AChoreographerFrameCallbackData, data: rawptr)
Prototype of the function that is called when a new frame is being rendered. It is called with callbackData describing multiple frame timelines, as well as the data pointer provided by the application that registered a callback. The callbackData does not outlive the callback.
Related Procedures With Parameters
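A minimal sketch, not from the source, of driving per-frame work through the vsync callback. It assumes the bindings mirror the NDK C signatures, that AChoreographer_getInstance is called on a thread that owns an ALooper, and that a posted callback fires only once, so it must be re-posted each frame; frame_cb and start_vsync are illustrative names.

import android "core:sys/android"

frame_cb :: proc "c" (callbackData: ^android.AChoreographerFrameCallbackData, data: rawptr) {
	frame_time := android.AChoreographerFrameCallbackData_getFrameTimeNanos(callbackData)
	_ = frame_time // render the frame for this timestamp

	// A posted callback fires only once; re-post to keep receiving vsync.
	android.AChoreographer_postVsyncCallback(android.AChoreographer_getInstance(), frame_cb, data)
}

start_vsync :: proc() {
	android.AChoreographer_postVsyncCallback(android.AChoreographer_getInstance(), frame_cb, nil)
}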
AColor_xy ¶
AColor_xy :: struct {
	x: f32,
	y: f32,
}

Color is defined in CIE XYZ coordinates.
AConfiguration ¶
AConfiguration :: struct {}
AConfiguration is an opaque type used to get and set various subsystem configurations.

An AConfiguration pointer can be obtained using:
- AConfiguration_new()
- AConfiguration_fromAssetManager()

A short usage sketch follows the list of related procedures below.
Related Procedures With Parameters
- AConfiguration_copy
- AConfiguration_delete
- AConfiguration_diff
- AConfiguration_fromAssetManager
- AConfiguration_getCountry
- AConfiguration_getDensity
- AConfiguration_getGrammaticalGender
- AConfiguration_getKeyboard
- AConfiguration_getKeysHidden
- AConfiguration_getLanguage
- AConfiguration_getLayoutDirection
- AConfiguration_getMcc
- AConfiguration_getMnc
- AConfiguration_getNavHidden
- AConfiguration_getNavigation
- AConfiguration_getOrientation
- AConfiguration_getScreenHeightDp
- AConfiguration_getScreenLong
- AConfiguration_getScreenRound
- AConfiguration_getScreenSize
- AConfiguration_getScreenWidthDp
- AConfiguration_getSdkVersion
- AConfiguration_getSmallestScreenWidthDp
- AConfiguration_getTouchscreen
- AConfiguration_getUiModeNight
- AConfiguration_getUiModeType
- AConfiguration_isBetterThan
- AConfiguration_match
- AConfiguration_setCountry
- AConfiguration_setDensity
- AConfiguration_setGrammaticalGender
- AConfiguration_setKeyboard
- AConfiguration_setKeysHidden
- AConfiguration_setLanguage
- AConfiguration_setLayoutDirection
- AConfiguration_setMcc
- AConfiguration_setMnc
- AConfiguration_setNavHidden
- AConfiguration_setNavigation
- AConfiguration_setOrientation
- AConfiguration_setScreenHeightDp
- AConfiguration_setScreenLong
- AConfiguration_setScreenRound
- AConfiguration_setScreenSize
- AConfiguration_setScreenWidthDp
- AConfiguration_setSdkVersion
- AConfiguration_setSmallestScreenWidthDp
- AConfiguration_setTouchscreen
- AConfiguration_setUiModeNight
- AConfiguration_setUiModeType
Related Procedures With Returns
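A minimal sketch, not from the source, of querying the current device configuration, assuming the bindings mirror the NDK C signatures; current_density is an illustrative name.

import android "core:sys/android"

current_density :: proc(mgr: ^android.AAssetManager) -> i32 {
	config := android.AConfiguration_new()
	defer android.AConfiguration_delete(config)

	// Populate from the current device configuration.
	android.AConfiguration_fromAssetManager(config, mgr)
	return android.AConfiguration_getDensity(config)
}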
ADataSpace ¶
ADataSpace :: enum i32 {
	// Default-assumption data space, when not explicitly specified.
	//
	// It is safest to assume the buffer is an image with sRGB primaries and
	// encoding ranges, but the consumer and/or the producer of the data may
	// simply be using defaults. No automatic gamma transform should be
	// expected, except for a possible display gamma transform when drawn to a
	// screen.
	UNKNOWN = 0,

	// Standard aspect.
	//
	// Defines the chromaticity coordinates of the source primaries in terms of
	// the CIE 1931 definition of x and y specified in ISO 11664-1.
	STANDARD_MASK = 4128768,

	// Chromaticity coordinates are unknown or are determined by the application.
	// Implementations shall use the following suggested standards:
	//
	// All YCbCr formats: BT709 if size is 720p or larger (since most video
	// content is letterboxed this corresponds to width of 1280 or greater, or
	// height of 720 or greater); BT601_625 if size is smaller than 720p or is JPEG.
	// All RGB formats: BT709.
	//
	// For all other formats the standard is undefined, and implementations
	// should use an appropriate standard for the data represented.
	STANDARD_UNSPECIFIED = 0,

	// Primaries (x, y): green (0.300, 0.600), blue (0.150, 0.060), red (0.640, 0.330),
	// white D65 (0.3127, 0.3290).
	// Use the unadjusted KR = 0.2126, KB = 0.0722 luminance interpretation for RGB conversion.
	STANDARD_BT709 = 65536,

	// Primaries (x, y): green (0.290, 0.600), blue (0.150, 0.060), red (0.640, 0.330),
	// white D65 (0.3127, 0.3290).
	// KR = 0.299, KB = 0.114. This adjusts the luminance interpretation for RGB
	// conversion from the one purely determined by the primaries to minimize
	// the color shift into RGB space that uses BT.709 primaries.
	STANDARD_BT601_625 = 131072,

	// Same primaries as STANDARD_BT601_625.
	// Use the unadjusted KR = 0.222, KB = 0.071 luminance interpretation for RGB conversion.
	STANDARD_BT601_625_UNADJUSTED = 196608,

	// Primaries (x, y): green (0.310, 0.595), blue (0.155, 0.070), red (0.630, 0.340),
	// white D65 (0.3127, 0.3290).
	// KR = 0.299, KB = 0.114. This adjusts the luminance interpretation for RGB
	// conversion from the one purely determined by the primaries to minimize
	// the color shift into RGB space that uses BT.709 primaries.
	STANDARD_BT601_525 = 262144,

	// Same primaries as STANDARD_BT601_525.
	// Use the unadjusted KR = 0.212, KB = 0.087 luminance interpretation for
	// RGB conversion (as in SMPTE 240M).
	STANDARD_BT601_525_UNADJUSTED = 327680,

	// Primaries (x, y): green (0.170, 0.797), blue (0.131, 0.046), red (0.708, 0.292),
	// white D65 (0.3127, 0.3290).
	// Use the unadjusted KR = 0.2627, KB = 0.0593 luminance interpretation for RGB conversion.
	STANDARD_BT2020 = 393216,

	// Same primaries as STANDARD_BT2020.
	// Use the unadjusted KR = 0.2627, KB = 0.0593 luminance interpretation for
	// RGB conversion using the linear domain.
	STANDARD_BT2020_CONSTANT_LUMINANCE = 458752,

	// Primaries (x, y): green (0.21, 0.71), blue (0.14, 0.08), red (0.67, 0.33),
	// white C (0.310, 0.316).
	// Use the unadjusted KR = 0.30, KB = 0.11 luminance interpretation for RGB conversion.
	STANDARD_BT470M = 524288,

	// Primaries (x, y): green (0.243, 0.692), blue (0.145, 0.049), red (0.681, 0.319),
	// white C (0.310, 0.316).
	// Use the unadjusted KR = 0.254, KB = 0.068 luminance interpretation for RGB conversion.
	STANDARD_FILM = 589824,

	// SMPTE EG 432-1 and SMPTE RP 431-2 (DCI-P3).
	// Primaries (x, y): green (0.265, 0.690), blue (0.150, 0.060), red (0.680, 0.320),
	// white D65 (0.3127, 0.3290).
	STANDARD_DCI_P3 = 655360,

	// Adobe RGB.
	// Primaries (x, y): green (0.210, 0.710), blue (0.150, 0.060), red (0.640, 0.330),
	// white D65 (0.3127, 0.3290).
	STANDARD_ADOBE_RGB = 720896,

	// Transfer aspect.
	//
	// Transfer characteristics are the opto-electronic transfer characteristic
	// at the source as a function of linear optical intensity (luminance).
	//
	// For digital signals, E corresponds to the recorded value. Normally, the
	// transfer function is applied in RGB space to each of the R, G and B
	// components independently. This may result in color shift that can be
	// minimized by applying the transfer function in Lab space only for the L
	// component. Implementations may apply the transfer function in RGB space
	// for all pixel formats if desired.
	TRANSFER_MASK = 130023424,

	// Transfer characteristics are unknown or are determined by the application.
	//
	// Implementations should use the following transfer functions:
	// For YCbCr formats: TRANSFER_SMPTE_170M. For RGB formats: TRANSFER_SRGB.
	// For all other formats the transfer function is undefined, and
	// implementations should use an appropriate standard for the data represented.
	TRANSFER_UNSPECIFIED = 0,

	// Transfer characteristic curve:
	//   E = L
	// L: luminance of image, 0 <= L <= 1 for conventional colorimetry
	// E: corresponding electrical signal
	TRANSFER_LINEAR = 4194304,

	// Transfer characteristic curve:
	//   E = 1.055 * L^(1/2.4) - 0.055  for 0.0031308 <= L <= 1
	//     = 12.92 * L                  for 0 <= L < 0.0031308
	TRANSFER_SRGB = 8388608,

	// BT.601 525, BT.601 625, BT.709, BT.2020.
	// Transfer characteristic curve:
	//   E = 1.099 * L^0.45 - 0.099  for 0.018 <= L <= 1
	//     = 4.500 * L               for 0 <= L < 0.018
	TRANSFER_SMPTE_170M = 12582912,

	// Assumed display gamma 2.2. Curve: E = L^(1/2.2)
	TRANSFER_GAMMA2_2 = 16777216,

	// Display gamma 2.6. Curve: E = L^(1/2.6)
	TRANSFER_GAMMA2_6 = 20971520,

	// Display gamma 2.8. Curve: E = L^(1/2.8)
	TRANSFER_GAMMA2_8 = 25165824,

	// SMPTE ST 2084 (Dolby Perceptual Quantizer).
	// Transfer characteristic curve:
	//   E = ((c1 + c2 * L^n) / (1 + c3 * L^n))^m
	//   c1 = c3 - c2 + 1 = 3424 / 4096 = 0.8359375
	//   c2 = 32 * 2413 / 4096 = 18.8515625
	//   c3 = 32 * 2392 / 4096 = 18.6875
	//   m  = 128 * 2523 / 4096 = 78.84375
	//   n  = 0.25 * 2610 / 4096 = 0.1593017578125
	// L: luminance of image, 0 <= L <= 1 for HDR colorimetry;
	// L = 1 corresponds to 10000 cd/m2
	TRANSFER_ST2084 = 29360128,

	// ARIB STD-B67 Hybrid Log Gamma.
	// Transfer characteristic curve:
	//   E = r * L^0.5          for 0 <= L <= 1
	//     = a * ln(L - b) + c  for 1 < L
	//   a = 0.17883277, b = 0.28466892, c = 0.55991073, r = 0.5
	// L: luminance of image, 0 <= L for HDR colorimetry;
	// L = 1 corresponds to a reference white level of 100 cd/m2
	TRANSFER_HLG = 33554432,

	// Range aspect.
	//
	// Defines the range of values corresponding to the unit range of 0-1.
	// This is defined for YCbCr only, but can be expanded to RGB space.
	RANGE_MASK = 939524096,

	// Range is unknown or is determined by the application. Implementations
	// shall use the following suggested ranges:
	//
	// All YCbCr formats: limited range.
	// All RGB or RGBA formats (including RAW and Bayer): full range.
	// All Y formats: full range.
	//
	// For all other formats the range is undefined, and implementations should
	// use an appropriate range for the data represented.
	RANGE_UNSPECIFIED = 0,

	// Full range uses all values for Y, Cb and Cr from 0 to 2^b-1,
	// where b is the bit depth of the color format.
	RANGE_FULL = 134217728,

	// Limited range uses values 16/256*2^b to 235/256*2^b for Y, and
	// 1/16*2^b to 15/16*2^b for Cb, Cr, R, G and B, where b is the bit depth
	// of the color format.
	//
	// E.g. for 8-bit-depth formats:
	//   Luma (Y) samples should range from 16 to 235, inclusive
	//   Chroma (Cb, Cr) samples should range from 16 to 240, inclusive
	// For 10-bit-depth formats:
	//   Luma (Y) samples should range from 64 to 940, inclusive
	//   Chroma (Cb, Cr) samples should range from 64 to 960, inclusive
	RANGE_LIMITED = 268435456,

	// Extended range is used for scRGB. Intended for use with floating point
	// pixel formats. [0.0 - 1.0] is the standard sRGB space. Values outside
	// the range 0.0 - 1.0 can encode color outside the sRGB gamut.
	// Used to blend / merge multiple dataspaces on a single display.
	RANGE_EXTENDED = 402653184,

	// scRGB linear encoding: the red, green and blue components are stored in
	// extended sRGB space, but are linear, not gamma-encoded. The RGB primaries
	// and the white point are the same as BT.709. The values are floating
	// point; a pixel value of 1.0, 1.0, 1.0 corresponds to sRGB white (D65) at
	// 80 nits. Values beyond the range [0.0 - 1.0] would correspond to other
	// color spaces and/or HDR content.
	SCRGB_LINEAR = 406913024, // STANDARD_BT709 | TRANSFER_LINEAR | RANGE_EXTENDED

	// sRGB gamma encoding: the red, green and blue components are stored in
	// sRGB space, and converted to linear space when read, using the sRGB
	// transfer function for each of the R, G and B components. When written,
	// the inverse transformation is performed. The alpha component, if
	// present, is always stored in linear space and is left unmodified when
	// read or written. Uses full range and the BT.709 standard.
	SRGB = 142671872, // STANDARD_BT709 | TRANSFER_SRGB | RANGE_FULL

	// scRGB: the red, green and blue components are stored in extended sRGB
	// space, and gamma-encoded using the sRGB transfer function. The RGB
	// primaries and the white point are the same as BT.709. The values are
	// floating point; a pixel value of 1.0, 1.0, 1.0 corresponds to sRGB white
	// (D65) at 80 nits. Values beyond the range [0.0 - 1.0] would correspond
	// to other color spaces and/or HDR content.
	SCRGB = 411107328, // STANDARD_BT709 | TRANSFER_SRGB | RANGE_EXTENDED

	// Display P3: same primaries and white point as DCI-P3, but the sRGB
	// transfer function.
	DISPLAY_P3 = 143261696, // STANDARD_DCI_P3 | TRANSFER_SRGB | RANGE_FULL

	// ITU-R Recommendation 2020 (BT.2020), ultra-high-definition television.
	// Full range, SMPTE 2084 (PQ) transfer and BT2020 standard.
	BT2020_PQ = 163971072, // STANDARD_BT2020 | TRANSFER_ST2084 | RANGE_FULL

	// ITU-R Recommendation 2020 (BT.2020), ultra-high-definition television.
	// Limited range, SMPTE 2084 (PQ) transfer and BT2020 standard.
	BT2020_ITU_PQ = 298188800, // STANDARD_BT2020 | TRANSFER_ST2084 | RANGE_LIMITED

	// Adobe RGB: full range, gamma 2.2 transfer and Adobe RGB primaries.
	// Note: the application is responsible for gamma encoding the data, as a
	// 2.2 gamma encoding is not supported in HW.
	ADOBE_RGB = 151715840, // STANDARD_ADOBE_RGB | TRANSFER_GAMMA2_2 | RANGE_FULL

	// JPEG File Interchange Format (JFIF): same model as BT.601-625, but all
	// values (Y, Cb, Cr) range from 0 to 255.
	// Full range, SMPTE 170M transfer and BT.601_625 standard.
	JFIF = 146931712, // STANDARD_BT601_625 | TRANSFER_SMPTE_170M | RANGE_FULL

	// ITU-R Recommendation 601 (BT.601), 625-line standard-definition television.
	// Limited range, SMPTE 170M transfer and BT.601_625 standard.
	BT601_625 = 281149440, // STANDARD_BT601_625 | TRANSFER_SMPTE_170M | RANGE_LIMITED

	// ITU-R Recommendation 601 (BT.601), 525-line standard-definition television (NTSC).
	// Limited range, SMPTE 170M transfer and BT.601_525 standard.
	BT601_525 = 281280512, // STANDARD_BT601_525 | TRANSFER_SMPTE_170M | RANGE_LIMITED

	// ITU-R Recommendation 2020 (BT.2020), ultra-high-definition television.
	// Full range, SMPTE 170M transfer and BT2020 standard.
	BT2020 = 147193856, // STANDARD_BT2020 | TRANSFER_SMPTE_170M | RANGE_FULL

	// ITU-R Recommendation 709 (BT.709), high-definition television.
	// Limited range, SMPTE 170M transfer and BT.709 standard.
	BT709 = 281083904, // STANDARD_BT709 | TRANSFER_SMPTE_170M | RANGE_LIMITED

	// SMPTE EG 432-1 and SMPTE RP 431-2: Digital Cinema DCI-P3.
	// Full range, gamma 2.6 transfer and D65 DCI-P3 standard.
	// Note: the application is responsible for gamma encoding the data, as a
	// 2.6 gamma encoding is not supported in HW.
	DCI_P3 = 155844608, // STANDARD_DCI_P3 | TRANSFER_GAMMA2_6 | RANGE_FULL

	// sRGB linear encoding: the red, green and blue components are stored in
	// sRGB space, but are linear, not gamma-encoded. The RGB primaries and the
	// white point are the same as BT.709. The values are encoded using the
	// full range ([0,255] for 8-bit) for all components.
	SRGB_LINEAR = 138477568, // STANDARD_BT709 | TRANSFER_LINEAR | RANGE_FULL

	// Hybrid Log Gamma encoding: full range, HLG transfer and BT2020 standard.
	BT2020_HLG = 168165376, // STANDARD_BT2020 | TRANSFER_HLG | RANGE_FULL

	// ITU Hybrid Log Gamma encoding: limited range, HLG transfer and BT2020 standard.
	BT2020_ITU_HLG = 302383104, // STANDARD_BT2020 | TRANSFER_HLG | RANGE_LIMITED

	// Depth: this value is valid with formats HAL_PIXEL_FORMAT_Y16 and
	// HAL_PIXEL_FORMAT_BLOB.
	DEPTH = 4096,

	// ISO 16684-1:2011(E) Dynamic Depth: embedded depth metadata following the
	// dynamic depth specification.
	DYNAMIC_DEPTH = 4098,
}
ADataSpace.
Related Procedures With Parameters
Related Procedures With Returns
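The composite members are bitwise combinations of one STANDARD_*, one TRANSFER_* and one RANGE_* value, as the trailing comments in the enum show. A small sketch, not from the source, that verifies one such composition (Odin enum values are combined through their integer type here, since ADataSpace is a plain enum rather than a bit_set):

import android "core:sys/android"

check_srgb_composition :: proc() {
	composed := i32(android.ADataSpace.STANDARD_BT709) |
	            i32(android.ADataSpace.TRANSFER_SRGB)  |
	            i32(android.ADataSpace.RANGE_FULL)
	assert(composed == i32(android.ADataSpace.SRGB)) // 142671872
}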
AFont ¶
AFont :: struct {}
AFont provides information about a single font configuration.
Related Procedures With Parameters
Related Procedures With Returns
AFontMatcher ¶
AFontMatcher :: struct {}
AFontMatcher performs matching operations on the given parameters and the available font files. This matcher is not thread-safe; do not pass it to other threads.
Related Procedures With Parameters
Related Procedures With Returns
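A minimal matching sketch, not from the source: it assumes the bindings mirror the NDK C signatures (AFontMatcher_match takes a family name, UTF-16 text, a text length and an out-parameter for the matched run length, and always returns a non-nil AFont that must be closed); first_matched_font_path is an illustrative name.

import android "core:sys/android"
import "core:strings"

first_matched_font_path :: proc() -> string {
	matcher := android.AFontMatcher_create()
	defer android.AFontMatcher_destroy(matcher)

	text := [1]u16{'A'} // UTF-16 text to match against
	run_length: u32
	font := android.AFontMatcher_match(matcher, "sans-serif", &text[0], 1, &run_length)
	defer android.AFont_close(font)

	// Copy the path out before the AFont is closed.
	return strings.clone_from_cstring(android.AFont_getFontFilePath(font))
}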
AHardwareBuffer ¶
AHardwareBuffer :: struct {}
Opaque handle for a native hardware buffer.
Related Procedures With Parameters
- AHardwareBuffer_acquire
- AHardwareBuffer_describe
- AHardwareBuffer_getId
- AHardwareBuffer_lock
- AHardwareBuffer_lockAndGetInfo
- AHardwareBuffer_lockPlanes
- AHardwareBuffer_release
- AHardwareBuffer_sendHandleToUnixSocket
- AHardwareBuffer_toHardwareBuffer
- AHardwareBuffer_unlock
- ANeuralNetworksMemory_createFromAHardwareBuffer
- ASensorManager_createHardwareBufferDirectChannel
- ASurfaceTransaction_setBuffer
- ASurfaceTransaction_setBufferWithRelease
Related Procedures With Returns
AHardwareBuffer_Desc ¶
AHardwareBuffer_Desc :: struct {
	width:  u32, // Width in pixels.
	height: u32, // Height in pixels.
	// Number of images in an image array. AHardwareBuffers with one layer
	// correspond to regular 2D textures. AHardwareBuffers with more than one
	// layer correspond to texture arrays. If the layer count is a multiple of
	// 6 and the usage flag AHARDWAREBUFFER_USAGE_GPU_CUBE_MAP is present, the
	// buffer is a cube map or a cube map array.
	layers: u32,
	format: AHardwareBuffer_Format,     // One of AHardwareBuffer_Format.
	usage:  AHardwareBuffer_UsageFlags, // Combination of AHardwareBuffer_UsageFlags.
	stride: u32,                        // Row stride in pixels, ignored for AHardwareBuffer_allocate().
	rfu0:   u32,                        // Initialize to zero, reserved for future use.
	rfu1:   u64,                        // Initialize to zero, reserved for future use.
}
Buffer description. Used for allocating new buffers and querying parameters of existing ones.
Related Procedures With Parameters
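A minimal allocation sketch, not from the source: it assumes AHardwareBuffer_allocate mirrors the NDK signature and returns 0 on success; since the usage field is a plain enum rather than a bit_set, the flags are combined through u64. allocate_rgba_buffer is an illustrative name.

import android "core:sys/android"

allocate_rgba_buffer :: proc(width, height: u32) -> ^android.AHardwareBuffer {
	usage := android.AHardwareBuffer_UsageFlags(
		u64(android.AHardwareBuffer_UsageFlags.GPU_SAMPLED_IMAGE) |
		u64(android.AHardwareBuffer_UsageFlags.CPU_WRITE_OFTEN))

	desc := android.AHardwareBuffer_Desc{
		width  = width,
		height = height,
		layers = 1, // a regular 2D texture
		format = .R8G8B8A8_UNORM,
		usage  = usage,
		// stride, rfu0 and rfu1 stay zero
	}

	buffer: ^android.AHardwareBuffer
	if android.AHardwareBuffer_allocate(&desc, &buffer) != 0 {
		return nil
	}
	return buffer // release with AHardwareBuffer_release when done
}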
AHardwareBuffer_Format ¶
AHardwareBuffer_Format :: enum u32 {
	// Corresponding formats: Vulkan VK_FORMAT_R8G8B8A8_UNORM, OpenGL ES GL_RGBA8.
	R8G8B8A8_UNORM = 1,

	// 32 bits per pixel, 8 bits per channel format where alpha values are
	// ignored (always opaque).
	// Corresponding formats: Vulkan VK_FORMAT_R8G8B8A8_UNORM, OpenGL ES GL_RGB8.
	R8G8B8X8_UNORM = 2,

	// Corresponding formats: Vulkan VK_FORMAT_R8G8B8_UNORM, OpenGL ES GL_RGB8.
	R8G8B8_UNORM = 3,

	// Corresponding formats: Vulkan VK_FORMAT_R5G6B5_UNORM_PACK16, OpenGL ES GL_RGB565.
	R5G6B5_UNORM = 4,

	// Corresponding formats: Vulkan VK_FORMAT_R16G16B16A16_SFLOAT, OpenGL ES GL_RGBA16F.
	R16G16B16A16_FLOAT = 22,

	// Corresponding formats: Vulkan VK_FORMAT_A2B10G10R10_UNORM_PACK32, OpenGL ES GL_RGB10_A2.
	R10G10B10A2_UNORM = 43,

	// Opaque binary blob format. Must have height 1 and one layer, with width
	// equal to the buffer size in bytes. Corresponds to Vulkan buffers and
	// OpenGL buffer objects. Can be bound to the latter using GL_EXT_external_buffer.
	BLOB = 33,

	// Corresponding formats: Vulkan VK_FORMAT_D16_UNORM, OpenGL ES GL_DEPTH_COMPONENT16.
	D16_UNORM = 48,

	// Corresponding formats: Vulkan VK_FORMAT_X8_D24_UNORM_PACK32, OpenGL ES GL_DEPTH_COMPONENT24.
	D24_UNORM = 49,

	// Corresponding formats: Vulkan VK_FORMAT_D24_UNORM_S8_UINT, OpenGL ES GL_DEPTH24_STENCIL8.
	D24_UNORM_S8_UINT = 50,

	// Corresponding formats: Vulkan VK_FORMAT_D32_SFLOAT, OpenGL ES GL_DEPTH_COMPONENT32F.
	D32_FLOAT = 51,

	// Corresponding formats: Vulkan VK_FORMAT_D32_SFLOAT_S8_UINT, OpenGL ES GL_DEPTH32F_STENCIL8.
	D32_FLOAT_S8_UINT = 52,

	// Corresponding formats: Vulkan VK_FORMAT_S8_UINT, OpenGL ES GL_STENCIL_INDEX8.
	S8_UINT = 53,

	// YUV 420 888 format. Must have an even width and height. Can be accessed
	// in OpenGL shaders through an external sampler. Does not support mip-maps,
	// cube-maps or multi-layered textures.
	Y8Cb8Cr8_420 = 35,

	// YUV P010 format. Must have an even width and height. Can be accessed in
	// OpenGL shaders through an external sampler. Does not support mip-maps,
	// cube-maps or multi-layered textures.
	YCbCr_P010 = 54,

	// YUV P210 format. Must have an even width and height. Can be accessed in
	// OpenGL shaders through an external sampler. Does not support mip-maps,
	// cube-maps or multi-layered textures.
	YCbCr_P210 = 60,

	// Corresponding formats: Vulkan VK_FORMAT_R8_UNORM, OpenGL ES GR_GL_R8.
	R8_UNORM = 56,

	// Corresponding formats: Vulkan VK_FORMAT_R16_UINT, OpenGL ES GL_R16UI.
	R16_UINT = 57,

	// Corresponding formats: Vulkan VK_FORMAT_R16G16_UINT, OpenGL ES GL_RG16UI.
	R16G16_UINT = 58,

	// Corresponding formats: Vulkan VK_FORMAT_R10X6G10X6B10X6A10X6_UNORM_4PACK16, OpenGL ES N/A.
	R10G10B10A10_UNORM = 59,
}
Buffer pixel formats.
Related Procedures With Parameters
Related Procedures With Returns
AHardwareBuffer_Plane ¶
AHardwareBuffer_Plane :: struct {
	data:        rawptr, // Points to the first byte in the plane.
	pixelStride: u32,    // Distance in bytes from the color channel of one pixel to the next.
	rowStride:   u32,    // Distance in bytes from the first byte of one row to the first byte of the next.
}
Holds data for a single image plane.
AHardwareBuffer_Planes ¶
AHardwareBuffer_Planes :: struct {
	planeCount: u32,                      // Number of distinct planes.
	planes:     [4]AHardwareBuffer_Plane, // Array of image planes.
}
Holds all image planes that contain the pixel data.
Related Procedures With Parameters
AHardwareBuffer_UsageFlags ¶
AHardwareBuffer_UsageFlags :: enum u64 {
	// The buffer will never be locked for direct CPU reads using the
	// AHardwareBuffer_lock() function. Note that reading the buffer using
	// OpenGL or Vulkan functions or memory mappings is still allowed.
	CPU_READ_NEVER = 0,

	// The buffer will sometimes be locked for direct CPU reads using the
	// AHardwareBuffer_lock() function. Note that reading the buffer using
	// OpenGL or Vulkan functions or memory mappings does not require the
	// presence of this flag.
	CPU_READ_RARELY = 2,

	// The buffer will often be locked for direct CPU reads using the
	// AHardwareBuffer_lock() function. Note that reading the buffer using
	// OpenGL or Vulkan functions or memory mappings does not require the
	// presence of this flag.
	CPU_READ_OFTEN = 3,

	// CPU read value mask.
	CPU_READ_MASK = 15,

	// The buffer will never be locked for direct CPU writes using the
	// AHardwareBuffer_lock() function. Note that writing the buffer using
	// OpenGL or Vulkan functions or memory mappings is still allowed.
	CPU_WRITE_NEVER = 0,

	// The buffer will sometimes be locked for direct CPU writes using the
	// AHardwareBuffer_lock() function. Note that writing the buffer using
	// OpenGL or Vulkan functions or memory mappings does not require the
	// presence of this flag.
	CPU_WRITE_RARELY = 32,

	// The buffer will often be locked for direct CPU writes using the
	// AHardwareBuffer_lock() function. Note that writing the buffer using
	// OpenGL or Vulkan functions or memory mappings does not require the
	// presence of this flag.
	CPU_WRITE_OFTEN = 48,

	// CPU write value mask.
	CPU_WRITE_MASK = 240,

	// The buffer will be read from by the GPU as a texture.
	GPU_SAMPLED_IMAGE = 256,

	// The buffer will be written to by the GPU as a framebuffer attachment.
	GPU_FRAMEBUFFER = 512,

	// The buffer will be written to by the GPU as a framebuffer attachment.
	//
	// Note that the name of this flag is somewhat misleading: it does not
	// imply that the buffer contains a color format. A buffer with a depth or
	// stencil format that will be used as a framebuffer attachment should also
	// have this flag. Use the equivalent flag GPU_FRAMEBUFFER to avoid this
	// confusion.
	GPU_COLOR_OUTPUT = 512,

	// The buffer will be used as a composer HAL overlay layer.
	//
	// This flag is currently only needed when using ASurfaceTransaction_setBuffer
	// to set a buffer. In all other cases, the framework adds this flag
	// internally to buffers that could be presented in a composer overlay.
	// ASurfaceTransaction_setBuffer is special because it uses buffers
	// allocated directly through AHardwareBuffer_allocate instead of buffers
	// allocated by the framework.
	COMPOSER_OVERLAY = 2048,

	// The buffer is protected from direct CPU access or being read by
	// non-secure hardware, such as video encoders.
	//
	// This flag is incompatible with CPU read and write flags. It is mainly
	// used when handling DRM video. Refer to the EGL extension
	// EGL_EXT_protected_content and GL extension GL_EXT_protected_textures for
	// more information on how these buffers are expected to behave.
	PROTECTED_CONTENT = 16384,

	// The buffer will be read by a hardware video encoder.
	VIDEO_ENCODE = 65536,

	// The buffer will be used for direct writes from sensors.
	// When this flag is present, the format must be AHARDWAREBUFFER_FORMAT_BLOB.
	SENSOR_DIRECT_DATA = 8388608,

	// The buffer will be used as a shader storage or uniform buffer object.
	// When this flag is present, the format must be AHARDWAREBUFFER_FORMAT_BLOB.
	GPU_DATA_BUFFER = 16777216,

	// The buffer will be used as a cube map texture. When this flag is
	// present, the buffer must have a layer count that is a multiple of 6.
	// Note that buffers with this flag must be bound to OpenGL textures using
	// the extension GL_EXT_EGL_image_storage instead of GL_KHR_EGL_image.
	GPU_CUBE_MAP = 33554432,

	// The buffer contains a complete mipmap hierarchy. Note that buffers with
	// this flag must be bound to OpenGL textures using the extension
	// GL_EXT_EGL_image_storage instead of GL_KHR_EGL_image.
	GPU_MIPMAP_COMPLETE = 67108864,

	// The buffer is used for front-buffer rendering. When front-buffer
	// rendering is specified, different usages may adjust their behavior as a
	// result. For example, when used as GPU_COLOR_OUTPUT the buffer will
	// behave similar to a single-buffered window. When used with
	// COMPOSER_OVERLAY, the system will try to prioritize the buffer receiving
	// an overlay plane and avoid caching it in intermediate composition buffers.
	FRONT_BUFFER = 4294967296,

	VENDOR_0  = 268435456,
	VENDOR_1  = 536870912,
	VENDOR_2  = 1073741824,
	VENDOR_3  = 2147483648,
	VENDOR_4  = 281474976710656,
	VENDOR_5  = 562949953421312,
	VENDOR_6  = 1125899906842624,
	VENDOR_7  = 2251799813685248,
	VENDOR_8  = 4503599627370496,
	VENDOR_9  = 9007199254740992,
	VENDOR_10 = 18014398509481984,
	VENDOR_11 = 36028797018963968,
	VENDOR_12 = 72057594037927936,
	VENDOR_13 = 144115188075855872,
	VENDOR_14 = 288230376151711744,
	VENDOR_15 = 576460752303423488,
	VENDOR_16 = 1152921504606846976,
	VENDOR_17 = 2305843009213693952,
	VENDOR_18 = 4611686018427387904,
	VENDOR_19 = 9223372036854775808,
}
Buffer usage flags, specifying how the buffer will be accessed.
Related Procedures With Parameters
AHdrMetadataType ¶
AHdrMetadataType :: enum u32 {
	HDR10_SMPTE2086 = 1,
	HDR10_CTA861_3  = 2,
	HDR10PLUS_SEI   = 3,
}
HDR metadata standards that are supported by Android.
AHdrMetadata_cta861_3 ¶
AHdrMetadata_cta861_3 :: struct {
	maxContentLightLevel:      f32,
	maxFrameAverageLightLevel: f32,
}

CTA 861.3 "HDR Static Metadata Extension" static metadata.
Related Procedures With Parameters
AHdrMetadata_smpte2086 ¶
AHdrMetadata_smpte2086 :: struct {
	displayPrimaryRed:   AColor_xy,
	displayPrimaryGreen: AColor_xy,
	displayPrimaryBlue:  AColor_xy,
	whitePoint:          AColor_xy,
	maxLuminance:        f32,
	minLuminance:        f32,
}
SMPTE ST 2086 "Mastering Display Color Volume" static metadata.
Related Procedures With Parameters
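A minimal sketch, not from the source, of attaching HDR10 mastering-display metadata to a surface transaction. It assumes ASurfaceTransaction_setHdrMetadata_smpte2086 mirrors the NDK signature (transaction, surface control, pointer to metadata); the primaries shown are the BT.2020 values and the luminance numbers are purely illustrative.

import android "core:sys/android"

set_hdr10_metadata :: proc(txn: ^android.ASurfaceTransaction, sc: ^android.ASurfaceControl) {
	meta := android.AHdrMetadata_smpte2086{
		displayPrimaryRed   = {0.708, 0.292},
		displayPrimaryGreen = {0.170, 0.797},
		displayPrimaryBlue  = {0.131, 0.046},
		whitePoint          = {0.3127, 0.3290},
		maxLuminance        = 1000.0, // cd/m^2, illustrative
		minLuminance        = 0.005,  // cd/m^2, illustrative
	}
	android.ASurfaceTransaction_setHdrMetadata_smpte2086(txn, sc, &meta)
}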
AHeadTrackerEvent ¶
AHeadTrackerEvent :: struct {
	// The fields rx, ry, rz are an Euler vector (rotation vector, i.e. a
	// vector whose direction indicates the axis of rotation and magnitude
	// indicates the angle to rotate around that axis) representing the
	// transform from the (arbitrary, possibly slowly drifting) reference frame
	// to the head frame. Expressed in radians. The magnitude of the vector
	// must be in the range [0, pi], while the values of individual axes are in
	// the range [-pi, pi].
	rx: f32,
	ry: f32,
	rz: f32,

	// The fields vx, vy, vz are an Euler vector (rotation vector) representing
	// the angular velocity of the head (relative to itself), in radians per
	// second. The direction of this vector indicates the axis of rotation, and
	// the magnitude indicates the rate of rotation.
	vx: f32,
	vy: f32,
	vz: f32,

	// This value changes each time the reference frame is suddenly and
	// significantly changed, for example if an orientation filter algorithm
	// used for determining the orientation has had its state reset.
	discontinuity_count: i32,
}
AHeadingEvent ¶
AHeadingEvent :: struct {
	// The direction in which the device is pointing relative to true north in
	// degrees. The value must be between 0.0 (inclusive) and 360.0 (exclusive),
	// with 0 indicating north, 90 east, 180 south, and 270 west.
	heading: f32,

	// Accuracy is defined at 68% confidence. In the case where the underlying
	// distribution is assumed Gaussian normal, this would be considered one
	// standard deviation. For example, if the heading returns 60 degrees, and
	// accuracy returns 10 degrees, then there is a 68 percent probability of
	// the true heading being between 50 degrees and 70 degrees.
	accuracy: f32,
}
AHeartRateEvent ¶
AHeartRateEvent :: struct {
	bpm:    f32,
	status: SensorStatus,
}
AImageDecoder ¶
AImageDecoder :: struct {}
Opaque handle for decoding images.

Introduced in API 30.

Create using one of the following:
- AImageDecoder_createFromAAsset
- AImageDecoder_createFromFd
- AImageDecoder_createFromBuffer

After creation, AImageDecoder_getHeaderInfo can be used to retrieve information about the encoded image. Other functions, like AImageDecoder_setTargetSize, can be used to specify how to decode, and AImageDecoder_decodeImage will decode into client-provided memory.

AImageDecoder objects are NOT thread-safe, and should not be shared across threads. A minimal decode sketch follows the list of related procedures below.
Related Procedures With Parameters
- AImageDecoder_advanceFrame
- AImageDecoder_computeSampledSize
- AImageDecoder_decodeImage
- AImageDecoder_delete
- AImageDecoder_getFrameInfo
- AImageDecoder_getHeaderInfo
- AImageDecoder_getMinimumStride
- AImageDecoder_getRepeatCount
- AImageDecoder_isAnimated
- AImageDecoder_rewind
- AImageDecoder_setAndroidBitmapFormat
- AImageDecoder_setCrop
- AImageDecoder_setDataSpace
- AImageDecoder_setInternallyHandleDisposePrevious
- AImageDecoder_setTargetSize
- AImageDecoder_setUnpremultipliedRequired
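A minimal sketch of the lifecycle described above. The parameter and return types are assumptions that mirror the NDK (result codes as AImageDecoderResult, sizes as uint):

package sample

import android "core:sys/android"

// Decode an asset into freshly allocated pixel memory.
decode_asset :: proc(asset: ^android.AAsset) -> (pixels: []u8, ok: bool) {
	decoder: ^android.AImageDecoder
	if android.AImageDecoder_createFromAAsset(asset, &decoder) != .SUCCESS {
		return
	}
	defer android.AImageDecoder_delete(decoder)

	info   := android.AImageDecoder_getHeaderInfo(decoder)
	height := android.AImageDecoderHeaderInfo_getHeight(info)
	stride := android.AImageDecoder_getMinimumStride(decoder)

	pixels = make([]u8, int(stride) * int(height))
	if android.AImageDecoder_decodeImage(decoder, raw_data(pixels), stride, uint(len(pixels))) != .SUCCESS {
		delete(pixels)
		return nil, false
	}
	return pixels, true
}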
AImageDecoderBlendOp ¶
AImageDecoderBlendOp :: enum i32 {
	// This frame replaces existing content. This corresponds to webp's "do not blend".
	SRC = 1,
	// This frame blends with the previous frame.
	SRC_OVER = 2,
}
* How a frame is blended with the previous frame. * Introduced in API 31. * This, along with other information in AImageDecoderFrameInfo, can be useful for determining whether a frame is independent, but the decoder handles blending frames, so a simple sequential client does not need this.
AImageDecoderFrameDisposalOp ¶
AImageDecoderFrameDisposalOp :: enum i32 {
	// No disposal. The following frame will be drawn directly on top of this one.
	NONE = 1,
	// The frame's rectangle is cleared to transparent (by AImageDecoder)
	// before decoding the next frame.
	BACKGROUND = 2,
	// The frame's rectangle is reverted to the prior frame before decoding the
	// next frame. This is handled by AImageDecoder, unless
	// {@link AImageDecoder_setInternallyHandleDisposePrevious} is set to false.
	PREVIOUS = 3,
}
* How a frame is “disposed” before showing the next one. * Introduced in API 31. * This, along with other information in AImageDecoderFrameInfo, can be useful for determining whether a frame is independent, but the decoder handles disposing of frames, so a simple sequential client does not need this.
AImageDecoderFrameInfo ¶
AImageDecoderFrameInfo :: struct {}
* Opaque handle to animation information about a single frame. * Introduced in API 31 * The duration (retrieved with {@link AImageDecoderFrameInfo_getDuration}) is necessary for clients to display the animation at the proper speed. The other information is helpful for a client that wants to determine what frames are independent (or what frames they depend on), but is unnecessary for a simple client that wants to sequentially display all frames.
Related Procedures With Parameters
Related Procedures With Returns
AImageDecoderHeaderInfo ¶
AImageDecoderHeaderInfo :: struct {}
* Opaque handle for representing information about the encoded image. * Introduced in API 30 * Retrieved using {@link AImageDecoder_getHeaderInfo} and passed to methods like {@link AImageDecoderHeaderInfo_getWidth} and {@link AImageDecoderHeaderInfo_getHeight}.
Related Procedures With Parameters
Related Procedures With Returns
AImageDecoderRepeatCount ¶
AImageDecoderRepeatCount :: enum i32 {
	// Reported by {@link AImageDecoder_getRepeatCount} if the animation should
	// repeat forever.
	//
	// Introduced in API 31.
	INFINITE = 2147483647,
}
AImageDecoderResult ¶
AImageDecoderResult :: enum i32 {
	// Decoding was successful and complete.
	SUCCESS = 0,
	// The input is incomplete.
	INCOMPLETE = -1,
	// The input contained an error after decoding some lines.
	ERROR = -2,
	// Could not convert. For example, attempting to decode an image with
	// alpha to an opaque format.
	INVALID_CONVERSION = -3,
	// The scale is invalid. It may have overflowed, or it may be incompatible
	// with the current alpha setting.
	INVALID_SCALE = -4,
	// Some other parameter is invalid.
	BAD_PARAMETER = -5,
	// Input was invalid before decoding any pixels.
	INVALID_INPUT = -6,
	// A seek was required and it failed.
	SEEK_ERROR = -7,
	// Some other error. For example, an internal allocation failed.
	INTERNAL_ERROR = -8,
	// AImageDecoder did not recognize the format.
	UNSUPPORTED_FORMAT = -9,
	// The animation has reached the end.
	FINISHED = -10,
	// This method cannot be called while the AImageDecoder is in its current
	// state. For example, various setters (like {@link AImageDecoder_setTargetSize})
	// can only be called while the AImageDecoder is set to decode the first
	// frame of an animation. This ensures that any blending and/or restoring
	// prior frames works correctly.
	INVALID_STATE = -11,
}
{@link AImageDecoder} functions result code.

Introduced in API 30.

Many functions will return this to indicate success ({@link ANDROID_IMAGE_DECODER_SUCCESS}) or the reason for the failure. On failure, any out-parameters should be considered uninitialized, except where specified. Use {@link AImageDecoder_resultToString} for a readable version of the result code.
Related Procedures With Parameters
Related Procedures With Returns
- AImageDecoder_advanceFrame
- AImageDecoder_computeSampledSize
- AImageDecoder_createFromAAsset
- AImageDecoder_createFromBuffer
- AImageDecoder_createFromFd
- AImageDecoder_decodeImage
- AImageDecoder_getFrameInfo
- AImageDecoder_rewind
- AImageDecoder_setAndroidBitmapFormat
- AImageDecoder_setCrop
- AImageDecoder_setDataSpace
- AImageDecoder_setTargetSize
- AImageDecoder_setUnpremultipliedRequired
AInputEvent ¶
AInputEvent :: struct {}
Input events are opaque structures. Use the provided accessor functions to read their properties.
Related Procedures With Parameters
- AInputEvent_getDeviceId
- AInputEvent_getSource
- AInputEvent_getType
- AInputEvent_release
- AInputQueue_finishEvent
- AInputQueue_preDispatchEvent
- AKeyEvent_getAction
- AKeyEvent_getDownTime
- AKeyEvent_getEventTime
- AKeyEvent_getFlags
- AKeyEvent_getKeyCode
- AKeyEvent_getMetaState
- AKeyEvent_getRepeatCount
- AKeyEvent_getScanCode
- AMotionEvent_getAction
- AMotionEvent_getActionButton
- AMotionEvent_getAxisValue
- AMotionEvent_getButtonState
- AMotionEvent_getClassification
- AMotionEvent_getDownTime
- AMotionEvent_getEdgeFlags
- AMotionEvent_getEventTime
- AMotionEvent_getFlags
- AMotionEvent_getHistoricalAxisValue
- AMotionEvent_getHistoricalEventTime
- AMotionEvent_getHistoricalOrientation
- AMotionEvent_getHistoricalPressure
- AMotionEvent_getHistoricalRawX
- AMotionEvent_getHistoricalRawY
- AMotionEvent_getHistoricalSize
- AMotionEvent_getHistoricalToolMajor
- AMotionEvent_getHistoricalToolMinor
- AMotionEvent_getHistoricalTouchMajor
- AMotionEvent_getHistoricalTouchMinor
- AMotionEvent_getHistoricalX
- AMotionEvent_getHistoricalY
- AMotionEvent_getHistorySize
- AMotionEvent_getMetaState
- AMotionEvent_getOrientation
- AMotionEvent_getPointerCount
- AMotionEvent_getPointerId
- AMotionEvent_getPressure
- AMotionEvent_getRawX
- AMotionEvent_getRawY
- AMotionEvent_getSize
- AMotionEvent_getToolMajor
- AMotionEvent_getToolMinor
- AMotionEvent_getToolType
- AMotionEvent_getTouchMajor
- AMotionEvent_getTouchMinor
- AMotionEvent_getX
- AMotionEvent_getXOffset
- AMotionEvent_getXPrecision
- AMotionEvent_getY
- AMotionEvent_getYOffset
- AMotionEvent_getYPrecision
Related Procedures With Returns
AInputQueue ¶
AInputQueue :: struct {}
An input queue is the facility through which you retrieve input events.
Related Procedures With Parameters
Related Procedures With Returns
ALimitedAxesImuUncalibratedEvent ¶
ALimitedAxesImuUncalibratedEvent :: struct {
	using _: struct #raw_union {
		uncalib: [3]f32,
		using _: struct {
			x_uncalib: f32,
			y_uncalib: f32,
			z_uncalib: f32,
		},
	},
	using _: struct #raw_union {
		bias: [3]f32,
		using _: struct {
			x_bias: f32,
			y_bias: f32,
			z_bias: f32,
		},
	},
	using _: struct #raw_union {
		supported: [3]f32,
		using _: struct {
			x_supported: f32,
			y_supported: f32,
			z_supported: f32,
		},
	},
}
ALooper ¶
ALooper :: struct {}
A looper is the state tracking an event loop for a thread. Loopers do not define event structures or other such things; rather they are a lower-level facility to attach one or more discrete objects listening for an event. An "event" here is simply data available on a file descriptor: each attached object has an associated file descriptor, and waiting for "events" means (internally) polling on all of these file descriptors until one or more of them have data available.

A thread can have only one ALooper associated with it.
Related Procedures With Parameters
Related Procedures With Returns
ALooperFdFlags ¶
ALooperFdFlags :: bit_set[ALooperFdFlagsBits; i32]
Related Procedures With Parameters
ALooperFdFlagsBits ¶
ALooperFdFlagsBits :: enum i32 {
	// The file descriptor is available for read operations.
	INPUT = 0,
	// The file descriptor is available for write operations.
	OUTPUT = 1,
	// The file descriptor has encountered an error condition.
	//
	// The looper always sends notifications about errors; it is not necessary
	// to specify this event flag in the requested event set.
	ERROR = 2,
	// The file descriptor was hung up. For example, indicates that the remote
	// end of a pipe or socket was closed.
	//
	// The looper always sends notifications about hangups; it is not necessary
	// to specify this event flag in the requested event set.
	HANGUP = 3,
	// The file descriptor is invalid. For example, the file descriptor was
	// closed prematurely.
	//
	// The looper always sends notifications about invalid file descriptors;
	// it is not necessary to specify this event flag in the requested event set.
	INVALID = 4,
}
* * Flags for file descriptor events that a looper can monitor. * * These flag bits can be combined to monitor multiple events at once.
ALooperPollResult ¶
ALooperPollResult :: enum i32 {
	// The poll was awoken using wake() before the timeout expired and no
	// callbacks were executed and no other file descriptors were ready.
	WAKE = -1,
	// One or more callbacks were executed.
	CALLBACK = -2,
	// The timeout expired.
	TIMEOUT = -3,
	// An error occurred.
	ERROR = -4,
}
Result from ALooper_pollOnce() and ALooper_pollAll().
ALooper_callbackFunc ¶
ALooper_callbackFunc :: proc "c" (fd: i32, events: bit_set[ALooperFdFlagsBits; i32], data: rawptr) -> i32
For callback-based event loops, this is the prototype of the function that is called when a file descriptor event occurs. It is given the file descriptor it is associated with, a bitmask of the poll events that were triggered (typically ALOOPER_EVENT_INPUT), and the data pointer that was originally supplied.

Implementations should return 1 to continue receiving callbacks, or 0 to have this file descriptor and callback unregistered from the looper.
Related Procedures With Parameters
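A sketch of a callback matching this prototype; the flag handling shown is illustrative:

package sample

import android "core:sys/android"

// Returning 1 keeps the fd registered with the looper; returning 0
// unregisters this fd and callback.
on_fd_event :: proc "c" (fd: i32, events: android.ALooperFdFlags, data: rawptr) -> i32 {
	if .HANGUP in events || .INVALID in events {
		return 0 // stop receiving callbacks for this descriptor
	}
	// ... read the pending data from fd here ...
	return 1
}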
AMotionClassification ¶
AMotionClassification :: enum i32 {
	// Classification constant: None.
	//
	// No additional information is available about the current motion event stream.
	NONE = 0,
	// Classification constant: Ambiguous gesture.
	//
	// The user's intent with respect to the current event stream is not yet
	// determined. Events starting in #AMOTION_EVENT_CLASSIFICATION_AMBIGUOUS_GESTURE
	// will eventually resolve into either #AMOTION_EVENT_CLASSIFICATION_DEEP_PRESS
	// or #AMOTION_EVENT_CLASSIFICATION_NONE. Gestural actions, such as scrolling,
	// should be inhibited until the classification resolves to another value or
	// the event stream ends.
	AMBIGUOUS_GESTURE = 1,
	// Classification constant: Deep press.
	//
	// The current event stream represents the user intentionally pressing harder
	// on the screen. This classification type should be used to accelerate the
	// long press behaviour.
	DEEP_PRESS = 2,
}
* * Constants that identify different gesture classification types.
Related Procedures With Returns
ANativeActivity ¶
ANativeActivity :: struct {
	// Pointer to the callback function table of the native application.
	// You can set the functions here to your own callbacks. The callbacks
	// pointer itself here should not be changed; it is allocated and managed
	// for you by the framework.
	callbacks: ^ANativeActivityCallbacks,

	// The global handle on the process's Java VM.
	vm: ^^JNIInvokeInterface,

	// JNI context for the main thread of the app. Note that this field
	// can ONLY be used from the main thread of the process, that is, the
	// thread that calls into the ANativeActivityCallbacks.
	env: ^^JNINativeInterface,

	// The NativeActivity object handle.
	//
	// IMPORTANT NOTE: This member is mis-named. It should really be named
	// 'activity' instead of 'clazz', since it's a reference to the
	// NativeActivity instance created by the system for you.
	//
	// We unfortunately cannot change this without breaking NDK
	// source-compatibility.
	clazz: jobject,

	// Path to this application's internal data directory.
	internalDataPath: cstring,

	// Path to this application's external (removable/mountable) data directory.
	externalDataPath: cstring,

	// The platform's SDK version code.
	sdkVersion: i32,

	// This is the native instance of the application. It is not used by
	// the framework, but can be set by the application to its own instance
	// state.
	instance: rawptr,

	// Pointer to the Asset Manager instance for the application. The
	// application uses this to access binary assets bundled inside its own
	// .apk file.
	assetManager: ^AAssetManager,

	// Available starting with Honeycomb: path to the directory containing
	// the application's OBB files (if any). If the app doesn't have any
	// OBB files, this directory may not exist.
	obbPath: cstring,
}
This structure defines the native side of an android.app.NativeActivity. It is created by the framework, and handed to the application's native code as it is being launched.
Related Procedures With Parameters
ANativeActivityCallbacks ¶
ANativeActivityCallbacks :: struct {
	// NativeActivity has started. See Java documentation for Activity.onStart()
	// for more information.
	onStart: proc "c" (activity: ^ANativeActivity),

	// NativeActivity has resumed. See Java documentation for Activity.onResume()
	// for more information.
	onResume: proc "c" (activity: ^ANativeActivity),

	// Framework is asking NativeActivity to save its current instance state.
	// See Java documentation for Activity.onSaveInstanceState() for more
	// information. The returned pointer needs to be created with malloc();
	// the framework will call free() on it for you. You also must fill in
	// outSize with the number of bytes in the allocation. Note that the
	// saved state will be persisted, so it can not contain any active
	// entities (pointers to memory, file descriptors, etc).
	onSaveInstanceState: proc "c" (activity: ^ANativeActivity, outSize: ^uint) -> rawptr,

	// NativeActivity has paused. See Java documentation for Activity.onPause()
	// for more information.
	onPause: proc "c" (activity: ^ANativeActivity),

	// NativeActivity has stopped. See Java documentation for Activity.onStop()
	// for more information.
	onStop: proc "c" (activity: ^ANativeActivity),

	// NativeActivity is being destroyed. See Java documentation for
	// Activity.onDestroy() for more information.
	onDestroy: proc "c" (activity: ^ANativeActivity),

	// Focus has changed in this NativeActivity's window. This is often used,
	// for example, to pause a game when it loses input focus.
	onWindowFocusChanged: proc "c" (activity: ^ANativeActivity, hasFocus: i32),

	// The drawing window for this native activity has been created. You
	// can use the given native window object to start drawing.
	onNativeWindowCreated: proc "c" (activity: ^ANativeActivity, window: ^ANativeWindow),

	// The drawing window for this native activity has been resized. You should
	// retrieve the new size from the window and ensure that your rendering in
	// it now matches.
	onNativeWindowResized: proc "c" (activity: ^ANativeActivity, window: ^ANativeWindow),

	// The drawing window for this native activity needs to be redrawn. To avoid
	// transient artifacts during screen changes (such as resizing after rotation),
	// applications should not return from this function until they have finished
	// drawing their window in its current state.
	onNativeWindowRedrawNeeded: proc "c" (activity: ^ANativeActivity, window: ^ANativeWindow),

	// The drawing window for this native activity is going to be destroyed.
	// You MUST ensure that you do not touch the window object after returning
	// from this function: in the common case of drawing to the window from
	// another thread, that means the implementation of this callback must
	// properly synchronize with the other thread to stop its drawing before
	// returning from here.
	onNativeWindowDestroyed: proc "c" (activity: ^ANativeActivity, window: ^ANativeWindow),

	// The input queue for this native activity's window has been created.
	// You can use the given input queue to start retrieving input events.
	onInputQueueCreated: proc "c" (activity: ^ANativeActivity, queue: ^AInputQueue),

	// The input queue for this native activity's window is being destroyed.
	// You should no longer try to reference this object upon returning from
	// this function.
	onInputQueueDestroyed: proc "c" (activity: ^ANativeActivity, queue: ^AInputQueue),

	// The rectangle in the window in which content should be placed has changed.
	onContentRectChanged: proc "c" (activity: ^ANativeActivity, #by_ptr rect: ARect),

	// The current device AConfiguration has changed. The new configuration can
	// be retrieved from assetManager.
	onConfigurationChanged: proc "c" (activity: ^ANativeActivity),

	// The system is running low on memory. Use this callback to release
	// resources you do not need, to help the system avoid killing more
	// important processes.
	onLowMemory: proc "c" (activity: ^ANativeActivity),
}
* * These are the callbacks the framework makes into a native application. * All of these callbacks happen on the main thread of the application. * By default, all callbacks are NULL; set to a pointer to your own function * to have it called.
ANativeActivity_createFunc ¶
ANativeActivity_createFunc :: proc "c" (activity: ^ANativeActivity, savedState: rawptr, savedStateSize: uint)
This is the function that must be in the native code to instantiate the application's native activity. It is called with the activity instance (see above); if the code is being instantiated from a previously saved instance, the savedState will be non-NULL and point to the saved data. You must make any copy of this data you need -- it will be released after you return from this function.
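A sketch of such an entry point, wiring up a couple of the callbacks described above. The exported symbol name ANativeActivity_onCreate is the default the framework looks for:

package sample

import android "core:sys/android"

@(export, link_name="ANativeActivity_onCreate")
on_create :: proc "c" (activity: ^android.ANativeActivity, savedState: rawptr, savedStateSize: uint) {
	activity.callbacks.onNativeWindowCreated = proc "c" (activity: ^android.ANativeActivity, window: ^android.ANativeWindow) {
		// the window exists now; rendering may begin
	}
	activity.callbacks.onLowMemory = proc "c" (activity: ^android.ANativeActivity) {
		// release caches here
	}
	// savedState is released after this function returns, so copy it if needed.
}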
ANativeWindow ¶
ANativeWindow :: struct {}
* * Opaque type that provides access to a native window. * * A pointer can be obtained using {@link ANativeWindow_fromSurface()}.
Related Procedures With Parameters
- ANativeWindow_acquire
- ANativeWindow_clearFrameRate
- ANativeWindow_getBuffersDataSpace
- ANativeWindow_getBuffersDefaultDataSpace
- ANativeWindow_getFormat
- ANativeWindow_getHeight
- ANativeWindow_getWidth
- ANativeWindow_lock
- ANativeWindow_release
- ANativeWindow_setBuffersDataSpace
- ANativeWindow_setBuffersGeometry
- ANativeWindow_setBuffersTransform
- ANativeWindow_setFrameRate
- ANativeWindow_setFrameRateWithChangeStrategy
- ANativeWindow_toSurface
- ANativeWindow_tryAllocateBuffers
- ANativeWindow_unlockAndPost
- ASurfaceControl_createFromWindow
Related Procedures With Returns
ANativeWindowTransform ¶
ANativeWindowTransform :: enum i32 { IDENTITY = 0, MIRROR_HORIZONTAL = 1, MIRROR_VERTICAL = 2, ROTATE_90 = 4, ROTATE_180 = 3, ROTATE_270 = 7, }
Transforms that can be applied to buffers as they are displayed to a window. Supported transforms are any combination of horizontal mirror, vertical mirror, and clockwise 90 degree rotation, in that order. Rotations of 180 and 270 degrees are made up of those basic transforms.
Related Procedures With Parameters
ANativeWindow_Buffer ¶
ANativeWindow_Buffer :: struct {
	// The number of pixels that are shown horizontally.
	width: i32,
	// The number of pixels that are shown vertically.
	height: i32,
	// The number of *pixels* that a line in the buffer takes in memory.
	// This may be >= width.
	stride: i32,
	// The format of the buffer. One of AHardwareBuffer_Format.
	format: AHardwareBuffer_Format,
	// The actual bits.
	bits: rawptr,
	// Do not touch.
	reserved: [6]u32,
}
* * Struct that represents a window's buffer. * * A pointer can be obtained using {@link ANativeWindow_lock()}.
Related Procedures With Parameters
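A sketch of the lock/fill/post cycle. It assumes a 32-bit pixel format and NDK-style i32 return codes; note that stride is measured in pixels, not bytes:

package sample

import android "core:sys/android"

// Lock the window, fill it with opaque black, then post.
clear_window :: proc(window: ^android.ANativeWindow) {
	buffer: android.ANativeWindow_Buffer
	if android.ANativeWindow_lock(window, &buffer, nil) != 0 {
		return
	}
	pixels := ([^]u32)(buffer.bits)
	for y in 0 ..< buffer.height {
		row := pixels[y * buffer.stride:]
		for x in 0 ..< buffer.width {
			row[x] = 0xFF000000
		}
	}
	android.ANativeWindow_unlockAndPost(window)
}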
ANativeWindow_ChangeFrameRateStrategy ¶
ANativeWindow_ChangeFrameRateStrategy :: enum i8 {
	// Change the frame rate only if the transition is going to be seamless.
	ONLY_IF_SEAMLESS = 0,
	// Change the frame rate even if the transition is going to be non-seamless,
	// i.e. with visual interruptions for the user.
	ALWAYS = 1,
}
Change frame rate strategy value for ANativeWindow_setFrameRate. Available since API level 31.
Related Procedures With Parameters
ANativeWindow_FrameRateCompatibility ¶
ANativeWindow_FrameRateCompatibility :: enum i8 {
	// There are no inherent restrictions on the frame rate of this window. When
	// the system selects a frame rate other than what the app requested, the
	// app will be able to run at the system frame rate without requiring pull
	// down. This value should be used when displaying game content, UIs, and
	// anything that isn't video.
	DEFAULT = 0,
	// This window is being used to display content with an inherently fixed
	// frame rate, e.g. a video that has a specific frame rate. When the system
	// selects a frame rate other than what the app requested, the app will need
	// to do pull down or use some other technique to adapt to the system's
	// frame rate. The user experience is likely to be worse (e.g. more frame
	// stuttering) than it would be if the system had chosen the app's requested
	// frame rate. This value should be used for video content.
	FIXED_SOURCE = 1,
}
Compatibility value for ANativeWindow_setFrameRate.
Related Procedures With Parameters
ANativeWindow_LegacyFormat ¶
ANativeWindow_LegacyFormat :: enum u32 {
	// Red: 8 bits, Green: 8 bits, Blue: 8 bits, Alpha: 8 bits.
	RGBA_8888 = 1,
	// Red: 8 bits, Green: 8 bits, Blue: 8 bits, Unused: 8 bits.
	RGBX_8888 = 2,
	// Red: 5 bits, Green: 6 bits, Blue: 5 bits.
	RGB_565 = 4,
}
* * Legacy window pixel format names, kept for backwards compatibility. New code and APIs should use the AHARDWAREBUFFER_FORMAT_* equivalents.
ANeuralNetworksBurst ¶
ANeuralNetworksBurst :: struct {}
ANeuralNetworksBurst is an opaque type that can be used to reduce the latency of a rapid sequence of executions. It will likely cause overhead if only used for a single execution.

ANeuralNetworksBurst serves as a context object for any number of inferences using {@link ANeuralNetworksExecution} objects. An ANeuralNetworksBurst object and the {@link ANeuralNetworksExecution} objects used with it must all have been created from the same {@link ANeuralNetworksCompilation} object.

This object is also used as a hint to drivers, providing insight into the lifetime of a rapid sequence of executions. For example, a driver may choose to increase the clock frequency of its accelerator for the lifetime of a burst object.

To use:
- Create a new burst object by calling the {@link ANeuralNetworksBurst_create} function.
- For each execution:
  - Create an {@link ANeuralNetworksExecution} and configure its properties (see {@link ANeuralNetworksExecution} for details).
  - Apply the model synchronously with {@link ANeuralNetworksExecution_burstCompute}, reusing the same {@link ANeuralNetworksBurst} with the new {@link ANeuralNetworksExecution}.
  - Use and free the {@link ANeuralNetworksExecution}.
- Destroy the burst with {@link ANeuralNetworksBurst_free}.

Available since NNAPI feature level 3.
Related Procedures With Parameters
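A sketch of that loop; input/output binding is elided and the result-code member name (.NO_ERROR) is an assumption about this binding:

package sample

import android "core:sys/android"

run_burst :: proc(compilation: ^android.ANeuralNetworksCompilation, iterations: int) {
	burst: ^android.ANeuralNetworksBurst
	if android.ANeuralNetworksBurst_create(compilation, &burst) != .NO_ERROR {
		return
	}
	defer android.ANeuralNetworksBurst_free(burst)

	for _ in 0 ..< iterations {
		execution: ^android.ANeuralNetworksExecution
		android.ANeuralNetworksExecution_create(compilation, &execution)
		// ... ANeuralNetworksExecution_setInput/_setOutput ...
		android.ANeuralNetworksExecution_burstCompute(execution, burst)
		android.ANeuralNetworksExecution_free(execution)
	}
}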
ANeuralNetworksCompilation ¶
ANeuralNetworksCompilation :: struct {}
ANeuralNetworksCompilation is an opaque type that can be used to compile a machine learning model.

To use:
- Create a new compilation instance by calling the {@link ANeuralNetworksCompilation_create} function or {@link ANeuralNetworksCompilation_createForDevices}.
- Set any desired properties on the compilation (for example, {@link ANeuralNetworksCompilation_setPreference}).
- Optionally, set the caching signature and the cache directory on the compilation by calling {@link ANeuralNetworksCompilation_setCaching}.
- Complete the compilation with {@link ANeuralNetworksCompilation_finish}.
- Use the compilation as many times as needed with {@link ANeuralNetworksExecution_create} and {@link ANeuralNetworksBurst_create}.
- Destroy the compilation with {@link ANeuralNetworksCompilation_free} once all executions using the compilation have completed.

A compilation is completed by calling {@link ANeuralNetworksCompilation_finish}. A compilation is destroyed by calling {@link ANeuralNetworksCompilation_free}. A compilation cannot be modified once {@link ANeuralNetworksCompilation_finish} has been called on it. (A lifecycle sketch follows the procedure list below.)

It is the application's responsibility to make sure that only one thread modifies a compilation at a given time. It is however safe for more than one thread to use the compilation once {@link ANeuralNetworksCompilation_finish} has returned.

It is also the application's responsibility to ensure that there are no other uses of the compilation after calling {@link ANeuralNetworksCompilation_free}. This includes any execution object or burst object created using the compilation, or any memory descriptor with the compilation as part of one of the roles specified by {@link ANeuralNetworksMemoryDesc_addInputRole} or {@link ANeuralNetworksMemoryDesc_addOutputRole}.

Available since NNAPI feature level 1.
Related Procedures With Parameters
- ANeuralNetworksBurst_create
- ANeuralNetworksCompilation_finish
- ANeuralNetworksCompilation_free
- ANeuralNetworksCompilation_getPreferredMemoryAlignmentForInput
- ANeuralNetworksCompilation_getPreferredMemoryAlignmentForOutput
- ANeuralNetworksCompilation_getPreferredMemoryPaddingForInput
- ANeuralNetworksCompilation_getPreferredMemoryPaddingForOutput
- ANeuralNetworksCompilation_setCaching
- ANeuralNetworksCompilation_setPreference
- ANeuralNetworksCompilation_setPriority
- ANeuralNetworksCompilation_setTimeout
- ANeuralNetworksExecution_create
- ANeuralNetworksMemoryDesc_addInputRole
- ANeuralNetworksMemoryDesc_addOutputRole
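The lifecycle sketch referenced above. The result-code (.NO_ERROR) and preference (.SUSTAINED_SPEED) member names are assumptions about this binding:

package sample

import android "core:sys/android"

compile_model :: proc(model: ^android.ANeuralNetworksModel) -> ^android.ANeuralNetworksCompilation {
	compilation: ^android.ANeuralNetworksCompilation
	if android.ANeuralNetworksCompilation_create(model, &compilation) != .NO_ERROR {
		return nil
	}
	android.ANeuralNetworksCompilation_setPreference(compilation, .SUSTAINED_SPEED)
	if android.ANeuralNetworksCompilation_finish(compilation) != .NO_ERROR {
		android.ANeuralNetworksCompilation_free(compilation)
		return nil
	}
	return compilation
}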
ANeuralNetworksDevice ¶
ANeuralNetworksDevice :: struct {}
ANeuralNetworksDevice is an opaque type that represents a device.

This type is used to query basic properties and supported operations of the corresponding device, and control which device(s) a model is to be run on.

Available since NNAPI feature level 3.
Related Procedures With Parameters
ANeuralNetworksEvent ¶
ANeuralNetworksEvent :: struct {}
ANeuralNetworksEvent is an opaque type that represents an event that will be signaled once an execution completes.

Available since NNAPI feature level 1.
Related Procedures With Parameters
ANeuralNetworksExecution ¶
ANeuralNetworksExecution :: struct {}
ANeuralNetworksExecution is an opaque type that can be used to apply a machine learning model to a set of inputs.

To use:
- Create a new execution instance by calling the {@link ANeuralNetworksExecution_create} function.
- Associate input buffers or memory regions to the model inputs with {@link ANeuralNetworksExecution_setInput} or {@link ANeuralNetworksExecution_setInputFromMemory}.
- Associate output buffers or memory regions to the model outputs with {@link ANeuralNetworksExecution_setOutput} or {@link ANeuralNetworksExecution_setOutputFromMemory}.
- Optionally, configure the execution with {@link ANeuralNetworksExecution_setLoopTimeout}, {@link ANeuralNetworksExecution_setMeasureTiming}, {@link ANeuralNetworksExecution_setReusable}, or {@link ANeuralNetworksExecution_setTimeout}.
- Apply the model with one of the following:
  - Asynchronously with {@link ANeuralNetworksExecution_startCompute} or with {@link ANeuralNetworksExecution_startComputeWithDependencies}, waiting for the execution to complete with {@link ANeuralNetworksEvent_wait}.
  - Synchronously with {@link ANeuralNetworksExecution_compute}.
  - Synchronously as part of an execution burst with {@link ANeuralNetworksExecution_burstCompute}.
  If the execution has been marked as reusable, then you can apply the model more than once.
- Destroy the execution with {@link ANeuralNetworksExecution_free}.

An output buffer or memory region must not overlap with any other output buffer or memory region, with an input buffer or memory region, or with an operand value in a memory object ({@link ANeuralNetworksModel_setOperandValueFromMemory}).

An execution is in the preparation state after it is created by {@link ANeuralNetworksExecution_create}. An execution may only be modified in the preparation state. Scheduling a computation by calling {@link ANeuralNetworksExecution_burstCompute}, {@link ANeuralNetworksExecution_compute}, {@link ANeuralNetworksExecution_startCompute}, or {@link ANeuralNetworksExecution_startComputeWithDependencies} will change the state of the execution object to the computation state. When the computation completes, the state of the execution object will change from the computation state to the completed state. The computation is completed when {@link ANeuralNetworksExecution_compute}, {@link ANeuralNetworksExecution_burstCompute}, or {@link ANeuralNetworksEvent_wait} has returned.

An execution can be applied to a model with {@link ANeuralNetworksExecution_burstCompute}, {@link ANeuralNetworksExecution_compute}, {@link ANeuralNetworksExecution_startCompute}, or {@link ANeuralNetworksExecution_startComputeWithDependencies} only once. Create new executions to do new evaluations of the model.

Starting at NNAPI feature level 5, the application may call {@link ANeuralNetworksExecution_setReusable} to set an execution to be reusable for multiple computations. The application may schedule and evaluate a computation again from the completed state of a reusable execution. The execution cannot be modified between computations.

It is the application's responsibility to make sure that only one thread modifies an execution at a given time. It is however safe for more than one thread to use {@link ANeuralNetworksEvent_wait} at the same time.

It is also the application's responsibility to ensure that the execution either has never been scheduled or has completed (i.e., that {@link ANeuralNetworksExecution_burstCompute}, {@link ANeuralNetworksExecution_compute}, or {@link ANeuralNetworksEvent_wait} has returned) before calling {@link ANeuralNetworksExecution_free}, and that there are no other uses of the execution after calling {@link ANeuralNetworksExecution_free}.

It is the application's responsibility to ensure that there are no concurrent computations scheduled and evaluated on the same execution, either by means of {@link ANeuralNetworksExecution_compute} or {@link ANeuralNetworksExecution_burstCompute} (which are synchronous) in different threads, or by means of {@link ANeuralNetworksExecution_startCompute} or {@link ANeuralNetworksExecution_startComputeWithDependencies} (which are asynchronous). It is however safe to schedule and evaluate multiple computations on different executions concurrently. (Concurrent uses of {@link ANeuralNetworksExecution_burstCompute} must be on different burst objects.) The runtime makes no guarantee on the ordering of completion of executions. If it's important to the application, the application should enforce the ordering by ensuring that one execution completes before the next is scheduled (for example, by scheduling all executions synchronously within a single thread, or by scheduling all executions asynchronously and using {@link ANeuralNetworksEvent_wait} between calls to {@link ANeuralNetworksExecution_startCompute}); or by using {@link ANeuralNetworksExecution_startComputeWithDependencies} to make the execution wait for a list of events to be signaled before starting the actual evaluation.

Available since NNAPI feature level 1.
Related Procedures With Parameters
- ANeuralNetworksExecution_burstCompute
- ANeuralNetworksExecution_compute
- ANeuralNetworksExecution_enableInputAndOutputPadding
- ANeuralNetworksExecution_free
- ANeuralNetworksExecution_getDuration
- ANeuralNetworksExecution_getOutputOperandDimensions
- ANeuralNetworksExecution_getOutputOperandRank
- ANeuralNetworksExecution_setInput
- ANeuralNetworksExecution_setInputFromMemory
- ANeuralNetworksExecution_setLoopTimeout
- ANeuralNetworksExecution_setMeasureTiming
- ANeuralNetworksExecution_setOutput
- ANeuralNetworksExecution_setOutputFromMemory
- ANeuralNetworksExecution_setReusable
- ANeuralNetworksExecution_setTimeout
- ANeuralNetworksExecution_startCompute
- ANeuralNetworksExecution_startComputeWithDependencies
ANeuralNetworksMemory ¶
ANeuralNetworksMemory :: struct {}
ANeuralNetworksMemory is an opaque type that represents memory. This type is used to represent shared memory, memory mapped files, and similar memories.

By using shared memory, a program can efficiently communicate to the runtime and drivers the tensors that define a model. See {@link ANeuralNetworksModel_setOperandValueFromMemory}. An application should typically create one shared memory object that contains every constant tensor needed to define a model. {@link ANeuralNetworksMemory_createFromFd} can be used to create shared memory from a file handle. {@link ANeuralNetworksMemory_createFromAHardwareBuffer} can be used to create shared memory from an AHardwareBuffer handle.

Memory objects can also be used to specify the input and output arguments of an execution. See {@link ANeuralNetworksExecution_setInputFromMemory} and {@link ANeuralNetworksExecution_setOutputFromMemory}.

When calling {@link ANeuralNetworksModel_setOperandValueFromMemory}, {@link ANeuralNetworksExecution_setInputFromMemory} and {@link ANeuralNetworksExecution_setOutputFromMemory}, each operand in the shared memory object must be aligned on a boundary of a byte size that is a multiple of the element type byte size, e.g., a tensor with {@link ANEURALNETWORKS_TENSOR_FLOAT32} type must be aligned on a 4-byte boundary.

It is the application's responsibility to ensure that there are no uses of the memory after calling {@link ANeuralNetworksMemory_free}. This includes any model which references this memory because of a call to {@link ANeuralNetworksModel_setOperandValueFromMemory}, any compilation created using such a model, any execution object or burst object created using such a compilation, or any execution which references this memory because of a call to {@link ANeuralNetworksExecution_setInputFromMemory} or {@link ANeuralNetworksExecution_setOutputFromMemory}.

Available since NNAPI feature level 1.

Starting at NNAPI feature level 4, the application may request creation of device native memory from {@link ANeuralNetworksMemoryDesc} to avoid potential memory copying and transformation overhead between executions. See also {@link ANeuralNetworksMemoryDesc} and {@link ANeuralNetworksMemory_createFromDesc}.
ANeuralNetworksMemoryDesc ¶
ANeuralNetworksMemoryDesc :: struct {}
ANeuralNetworksMemoryDesc is an opaque type that represents a memory descriptor. A memory descriptor describes the properties of a memory object, and is used by {@link ANeuralNetworksMemory_createFromDesc}.

To use:
- Create a new memory descriptor by calling {@link ANeuralNetworksMemoryDesc_create}.
- Specify all of the intended input and output roles by calling {@link ANeuralNetworksMemoryDesc_addInputRole} and {@link ANeuralNetworksMemoryDesc_addOutputRole}.
- Optionally, specify the memory dimensions by calling {@link ANeuralNetworksMemoryDesc_setDimensions}.
- Complete the memory descriptor with {@link ANeuralNetworksMemoryDesc_finish}.
- Use the memory descriptor as many times as needed with {@link ANeuralNetworksMemory_createFromDesc}.
- Destroy the memory descriptor with {@link ANeuralNetworksMemoryDesc_free}.

A memory descriptor is completed by calling {@link ANeuralNetworksMemoryDesc_finish}. A memory descriptor is destroyed by calling {@link ANeuralNetworksMemoryDesc_free}. A memory descriptor must not be modified once {@link ANeuralNetworksMemoryDesc_finish} has been called on it.

It is the application's responsibility to make sure that only one thread modifies a memory descriptor at a given time. It is however safe for more than one thread to use the memory descriptor once {@link ANeuralNetworksMemoryDesc_finish} has returned.

It is also the application's responsibility to ensure that there are no other uses of the memory descriptor after calling {@link ANeuralNetworksMemoryDesc_free}. It is however safe to continue using an {@link ANeuralNetworksMemory} object created from the memory descriptor.

Available since NNAPI feature level 4.
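A sketch of that lifecycle: one input role on the compilation's input 0, used for every execution (frequency 1.0). Error handling is elided and the signatures are assumed to mirror the NDK:

package sample

import android "core:sys/android"

make_device_memory :: proc(compilation: ^android.ANeuralNetworksCompilation) -> ^android.ANeuralNetworksMemory {
	desc: ^android.ANeuralNetworksMemoryDesc
	android.ANeuralNetworksMemoryDesc_create(&desc)
	defer android.ANeuralNetworksMemoryDesc_free(desc)

	android.ANeuralNetworksMemoryDesc_addInputRole(desc, compilation, 0, 1.0)
	android.ANeuralNetworksMemoryDesc_finish(desc)

	memory: ^android.ANeuralNetworksMemory
	android.ANeuralNetworksMemory_createFromDesc(desc, &memory)
	return memory
}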
ANeuralNetworksModel ¶
ANeuralNetworksModel :: struct {}
ANeuralNetworksModel is an opaque type that contains a description of the mathematical operations that constitute the model.

Build the model by calling:
- {@link ANeuralNetworksModel_create}
- {@link ANeuralNetworksModel_addOperation}
- {@link ANeuralNetworksModel_addOperand}

This forms a graph in which each operation and operand is a node, a directed edge from an operand to an operation indicates that the operand is an input to the operation, and a directed edge from an operation to an operand indicates that the operand is an output from the operation. This graph must be acyclic.

A model is completed by calling {@link ANeuralNetworksModel_finish}. A model is destroyed by calling {@link ANeuralNetworksModel_free}. A model cannot be modified once {@link ANeuralNetworksModel_finish} has been called on it.

It is the application's responsibility to make sure that only one thread modifies a model at a given time. It is however safe for more than one thread to use the model once {@link ANeuralNetworksModel_finish} has returned.

It is also the application's responsibility to ensure that there are no other uses of the model after calling {@link ANeuralNetworksModel_free}. This includes any compilation, execution object or burst object created using the model.

Available since NNAPI feature level 1.
Related Procedures With Parameters
- ANeuralNetworksCompilation_create
- ANeuralNetworksCompilation_createForDevices
- ANeuralNetworksModel_addOperand
- ANeuralNetworksModel_addOperation
- ANeuralNetworksModel_finish
- ANeuralNetworksModel_free
- ANeuralNetworksModel_getSupportedOperationsForDevices
- ANeuralNetworksModel_identifyInputsAndOutputs
- ANeuralNetworksModel_relaxComputationFloat32toFloat16
- ANeuralNetworksModel_setOperandSymmPerChannelQuantParams
- ANeuralNetworksModel_setOperandValue
- ANeuralNetworksModel_setOperandValueFromMemory
- ANeuralNetworksModel_setOperandValueFromModel
ANeuralNetworksOperandType ¶
ANeuralNetworksOperandType :: struct {
	// The data type, e.g. ANEURALNETWORKS_FLOAT32.
	type: OperandCode,
	// The number of dimensions (rank). Must be 0 for scalars.
	dimensionCount: u32,
	// The dimensions of the tensor. Must be nullptr for scalars.
	dimensions: [^]u32,
	// The quantization scale. Must be 0 when not applicable to an operand
	// type. See {@link OperandCode}.
	scale: f32,
	// The quantization zero point. Must be 0 when not applicable to an
	// operand type. See {@link OperandCode}.
	zeroPoint: i32,
}
ANeuralNetworksOperandType describes the type of an operand. This structure is used to describe both scalars and tensors.

A tensor operand type with all dimensions specified is "fully specified". Whenever possible (i.e., whenever the dimensions are known at model construction time), a tensor operand type should be (but is not required to be) fully specified, in order to enable the best possible performance.

If a tensor operand's type is not fully specified, the dimensions of the operand are deduced from the operand types and values of the operation for which that operand is an output or from the corresponding {@link ANEURALNETWORKS_IF} or {@link ANEURALNETWORKS_WHILE} operation input operand type in the case of referenced model input operands.

In the following situations, a tensor operand type must be fully specified:
- The operand has a constant value, set by {@link ANeuralNetworksModel_setOperandValue} (with a non-nullptr buffer) or {@link ANeuralNetworksModel_setOperandValueFromMemory}.
- The operand is a model input (see {@link ANeuralNetworksModel_identifyInputsAndOutputs}) of the main model within a compilation. A fully specified tensor operand type must either be provided to {@link ANeuralNetworksModel_addOperand}; or it must be provided to the corresponding {@link ANeuralNetworksExecution_setInput} or {@link ANeuralNetworksExecution_setInputFromMemory}. EXCEPTION: If the input is optional and omitted (by passing nullptr for buffer to {@link ANeuralNetworksExecution_setInput}) then it need not have a fully specified tensor operand type.
- The operand is a model output (see {@link ANeuralNetworksModel_identifyInputsAndOutputs}) of the main model within a compilation and is to be used with {@link ANeuralNetworksExecution_startComputeWithDependencies}. A fully specified tensor operand type must either be provided to {@link ANeuralNetworksModel_addOperand}; or it must be provided to the corresponding {@link ANeuralNetworksExecution_setOutput} or {@link ANeuralNetworksExecution_setOutputFromMemory}.

A tensor operand type of specified rank but some number of unspecified dimensions is represented by setting dimensionCount to the rank and each unspecified dimension to 0.

Available since NNAPI feature level 1.

Starting at NNAPI feature level 3, a tensor operand type of unspecified rank is represented by setting dimensionCount to 0 and dimensions to NULL (just as if it were a scalar operand type).
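A sketch of a fully specified tensor operand type. The OperandCode member name TENSOR_FLOAT32 is an assumption about this binding:

package sample

import android "core:sys/android"

make_tensor_type :: proc(dims: []u32) -> android.ANeuralNetworksOperandType {
	return android.ANeuralNetworksOperandType{
		type           = .TENSOR_FLOAT32,
		dimensionCount = u32(len(dims)),
		dimensions     = raw_data(dims),
		scale          = 0, // not a quantized type
		zeroPoint      = 0,
	}
}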
ANeuralNetworksOperationType ¶
ANeuralNetworksOperationType :: OperationCode
Aliasing to {@link OperationCode}, used in function {@link ANeuralNetworksModel_addOperation}.
Related Procedures With Parameters
ANeuralNetworksSymmPerChannelQuantParams ¶
ANeuralNetworksSymmPerChannelQuantParams :: struct {
	// The index of the channel dimension.
	channelDim: u32,
	// The size of the scale array. Should be equal to dimension[channelDim] of the Operand.
	scaleCount: u32,
	// The array of scaling values for each channel. Each value must be greater than zero.
	scales: [^]f32,
}
* * Parameters for ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL operand.
Related Procedures With Parameters
AObbInfo ¶
AObbInfo :: struct {}
{@link AObbInfo} is an opaque type representing information for obb storage.
Related Procedures With Parameters
Related Procedures With Returns
APerformanceHintManager ¶
APerformanceHintManager :: struct {}
An opaque type representing a handle to a performance hint manager. It must be released after use.

To use:
- Obtain the performance hint manager instance by calling the {@link APerformanceHint_getManager} function.
- Create an {@link APerformanceHintSession} with {@link APerformanceHint_createSession}.
- Get the preferred update rate in nanoseconds with {@link APerformanceHint_getPreferredUpdateRateNanos}.
Related Procedures With Parameters
Related Procedures With Returns
APerformanceHintSession ¶
APerformanceHintSession :: struct {}
An opaque type representing a handle to a performance hint session. A session can only be acquired from an {@link APerformanceHintManager} with {@link APerformanceHint_createSession}. It must be freed with {@link APerformanceHint_closeSession} after use.

A session represents a group of threads with an inter-related workload such that hints for their performance should be considered as a unit. The threads in a given session should be long-lived and not created or destroyed dynamically.

Each session is expected to have a periodic workload with a target duration for each cycle. The cycle duration is likely greater than the target work duration to allow other parts of the pipeline to run within the available budget. For example, a renderer thread may work at 60hz in order to produce frames at the display's frame rate but have a target work duration of only 6ms.

After each cycle of work, the client is expected to use {@link APerformanceHint_reportActualWorkDuration} to report the actual time taken to complete.

To use:
- Update a session's target duration for each cycle of work with {@link APerformanceHint_updateTargetWorkDuration}.
- Report the actual duration for the last cycle of work with {@link APerformanceHint_reportActualWorkDuration}.
- Release the session instance with {@link APerformanceHint_closeSession}.
Related Procedures With Parameters
Related Procedures With Returns
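A sketch of one reporting cycle against an existing session; both durations are nanoseconds:

package sample

import android "core:sys/android"

report_cycle :: proc(session: ^android.APerformanceHintSession, target_ns, actual_ns: i64) {
	android.APerformanceHint_updateTargetWorkDuration(session, target_ns)
	// ... do the cycle's work ...
	android.APerformanceHint_reportActualWorkDuration(session, actual_ns)
}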
ARect ¶
ARect :: struct {
	// Minimum X coordinate of the rectangle.
	left: i32,
	// Minimum Y coordinate of the rectangle.
	top: i32,
	// Maximum X coordinate of the rectangle.
	right: i32,
	// Maximum Y coordinate of the rectangle.
	bottom: i32,
}
Rectangular window area.

This is the NDK equivalent of the android.graphics.Rect class in Java. It is used with the {@link ANativeActivityCallbacks::onContentRectChanged} event callback and the ANativeWindow_lock() function.

In a valid ARect, left <= right and top <= bottom. An ARect with left=0, top=10, right=1, bottom=11 contains only one pixel at x=0, y=10.
Related Procedures With Parameters
Related Procedures With Returns
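Given that convention, the pixel dimensions fall out as simple differences (the left=0/right=1 example above yields a width of 1). A small helper sketch:

package sample

import android "core:sys/android"

rect_size :: proc(r: android.ARect) -> (w, h: i32) {
	return r.right - r.left, r.bottom - r.top
}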
ASensor ¶
ASensor :: struct {}
{@link ASensor} is an opaque type that provides information about a hardware sensor.

A {@link ASensor} pointer can be obtained using ASensorManager_getDefaultSensor(), ASensorManager_getDefaultSensorEx(), or from a {@link ASensorList}.

This file provides a set of functions to access properties of a {@link ASensor}:
- ASensor_getName()
- ASensor_getVendor()
- ASensor_getType()
- ASensor_getResolution()
- ASensor_getMinDelay()
- ASensor_getFifoMaxEventCount()
- ASensor_getFifoReservedEventCount()
- ASensor_getStringType()
- ASensor_getReportingMode()
- ASensor_isWakeUpSensor()
- ASensor_getHandle()
Related Procedures With Parameters
- ASensorEventQueue_disableSensor
- ASensorEventQueue_enableSensor
- ASensorEventQueue_registerSensor
- ASensorEventQueue_setEventRate
- ASensorManager_configureDirectReport
- ASensor_getFifoMaxEventCount
- ASensor_getFifoReservedEventCount
- ASensor_getHandle
- ASensor_getHighestDirectReportRateLevel
- ASensor_getMinDelay
- ASensor_getName
- ASensor_getReportingMode
- ASensor_getResolution
- ASensor_getStringType
- ASensor_getType
- ASensor_getVendor
- ASensor_isDirectChannelTypeSupported
- ASensor_isWakeUpSensor
Related Procedures With Returns
ASensorEvent ¶
ASensorEvent :: struct {
	version: i32,     // sizeof(struct ASensorEvent)
	sensor: i32,      // The sensor that generates this event
	type: SensorType, // Sensor type for the event, such as {@link ASENSOR_TYPE_ACCELEROMETER}
	reserved0: i32,

	// The time in nanoseconds at which the event happened; its behavior is
	// identical to SensorEvent::timestamp in the Java API
	// (/reference/android/hardware/SensorEvent#timestamp).
	timestamp: i64,

	using _: struct #raw_union {
		using _: struct #raw_union {
			data: [16]f32,
			vector: ASensorVector,
			acceleration: ASensorVector,
			gyro: ASensorVector,
			magnetic: ASensorVector,
			temperature: f32,
			distance: f32,
			light: f32,
			pressure: f32,
			relative_humidity: f32,
			uncalibrated_acceleration: AUncalibratedEvent,
			uncalibrated_gyro: AUncalibratedEvent,
			uncalibrated_magnetic: AUncalibratedEvent,
			meta_data: AMetaDataEvent,
			heart_rate: AHeartRateEvent,
			dynamic_sensor_meta: ADynamicSensorEvent,
			additional_info: AAdditionalInfoEvent,
			head_tracker: AHeadTrackerEvent,
			limited_axes_imu: ALimitedAxesImuEvent,
			limited_axes_imu_uncalibrated: ALimitedAxesImuUncalibratedEvent,
			heading: AHeadingEvent,
		},
		u64: struct #raw_union {
			data: [8]u64,
			step_counter: u64,
		},
	},

	flags: u32,
	reserved1: [3]i32,
}
Information that describes a sensor event; refer to SensorEvent (/reference/android/hardware/SensorEvent) for additional documentation.

NOTE: changes to this struct have to be backward compatible and reflected in sensors_event_t.
Related Procedures With Parameters
ASensorEventQueue ¶
ASensorEventQueue :: struct {}
{@link ASensorEventQueue} is an opaque type that provides access to {@link ASensorEvent} from hardware sensors.

A new {@link ASensorEventQueue} can be obtained using ASensorManager_createEventQueue().

This file provides a set of functions to enable and disable sensors, check and get events, and set event rates on a {@link ASensorEventQueue}:
- ASensorEventQueue_enableSensor()
- ASensorEventQueue_disableSensor()
- ASensorEventQueue_hasEvents()
- ASensorEventQueue_getEvents()
- ASensorEventQueue_setEventRate()
- ASensorEventQueue_requestAdditionalInfoEvents()
Related Procedures With Parameters
Related Procedures With Returns
ASensorList ¶
ASensorList :: [^]^ASensor
* * {@link ASensorList} is an array of references to {@link ASensor}. * * A {@link ASensorList} can be initialized using ASensorManager_getSensorList().
Related Procedures With Parameters
ASensorManager ¶
ASensorManager :: struct {}
{@link ASensorManager} is an opaque type to manage sensors and event queues.

{@link ASensorManager} is a singleton that can be obtained using ASensorManager_getInstance().

This file provides a set of functions that use {@link ASensorManager} to access and list hardware sensors, and create and destroy event queues (a usage sketch follows the lists below):
- ASensorManager_getSensorList()
- ASensorManager_getDefaultSensor()
- ASensorManager_getDefaultSensorEx()
- ASensorManager_createEventQueue()
- ASensorManager_destroyEventQueue()
Related Procedures With Parameters
- ASensorManager_configureDirectReport
- ASensorManager_createEventQueue
- ASensorManager_createHardwareBufferDirectChannel
- ASensorManager_createSharedMemoryDirectChannel
- ASensorManager_destroyDirectChannel
- ASensorManager_destroyEventQueue
- ASensorManager_getDefaultSensor
- ASensorManager_getDefaultSensorEx
- ASensorManager_getDynamicSensorList
- ASensorManager_getSensorList
Related Procedures With Returns
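The sketch referenced above: singleton manager, default sensor, event queue on this thread's looper. The SensorType member name (.ACCELEROMETER), the looper ident (1), and the getEvents signature are assumptions about this binding; the thread is assumed to already have an ALooper:

package sample

import android "core:sys/android"

poll_accelerometer :: proc() {
	manager := android.ASensorManager_getInstance()
	accel   := android.ASensorManager_getDefaultSensor(manager, .ACCELEROMETER)
	looper  := android.ALooper_forThread()
	queue   := android.ASensorManager_createEventQueue(manager, looper, 1, nil, nil)
	defer android.ASensorManager_destroyEventQueue(manager, queue)

	android.ASensorEventQueue_enableSensor(queue, accel)
	defer android.ASensorEventQueue_disableSensor(queue, accel)

	events: [16]android.ASensorEvent
	n := android.ASensorEventQueue_getEvents(queue, raw_data(events[:]), len(events))
	for ev in events[:max(n, 0)] {
		_ = ev.acceleration // ASensorVector: x, y, z in m/s^2
	}
}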
ASensorRef ¶
ASensorRef :: ^ASensor
* * {@link ASensorRef} is a type for constant pointers to {@link ASensor}. * * This is used to define entries in {@link ASensorList} arrays.
Related Procedures With Parameters
- ASensorEventQueue_disableSensor
- ASensorEventQueue_enableSensor
- ASensorEventQueue_registerSensor
- ASensorEventQueue_setEventRate
- ASensorManager_configureDirectReport
- ASensor_getFifoMaxEventCount
- ASensor_getFifoReservedEventCount
- ASensor_getHandle
- ASensor_getHighestDirectReportRateLevel
- ASensor_getMinDelay
- ASensor_getName
- ASensor_getReportingMode
- ASensor_getResolution
- ASensor_getStringType
- ASensor_getType
- ASensor_getVendor
- ASensor_isDirectChannelTypeSupported
- ASensor_isWakeUpSensor
Related Procedures With Returns
ASensorVector ¶
ASensorVector :: struct {
	using _: struct #raw_union {
		v: [3]f32,
		using _: struct {
			x: f32,
			y: f32,
			z: f32,
		},
		using _: struct {
			azimuth: f32,
			pitch: f32,
			roll: f32,
		},
	},
	status: SensorStatus,
	reserved: [3]u8,
}
NOTE: changes to these structs have to be backward compatible
AStorageManager ¶
AStorageManager :: struct {}
{@link AStorageManager} manages application OBB storage; a pointer can be obtained with AStorageManager_new().
Related Procedures With Parameters
Related Procedures With Returns
AStorageManager_obbCallbackFunc ¶
Callback function for asynchronous calls made on OBB files. "state" is one of the following constants:
- {@link AOBB_STATE_MOUNTED}
- {@link AOBB_STATE_UNMOUNTED}
- {@link AOBB_STATE_ERROR_INTERNAL}
- {@link AOBB_STATE_ERROR_COULD_NOT_MOUNT}
- {@link AOBB_STATE_ERROR_COULD_NOT_UNMOUNT}
- {@link AOBB_STATE_ERROR_NOT_MOUNTED}
- {@link AOBB_STATE_ERROR_ALREADY_MOUNTED}
- {@link AOBB_STATE_ERROR_PERMISSION_DENIED}
Related Procedures With Parameters
ASurfaceControl ¶
ASurfaceControl :: struct {}
The SurfaceControl API can be used to provide a hierarchy of surfaces for composition to the system compositor. ASurfaceControl represents a content node in this hierarchy.
Related Procedures With Parameters
- ASurfaceControl_acquire
- ASurfaceControl_create
- ASurfaceControl_release
- ASurfaceTransactionStats_getAcquireTime
- ASurfaceTransactionStats_getPreviousReleaseFenceFd
- ASurfaceTransaction_clearFrameRate
- ASurfaceTransaction_reparent
- ASurfaceTransaction_setBuffer
- ASurfaceTransaction_setBufferAlpha
- ASurfaceTransaction_setBufferDataSpace
- ASurfaceTransaction_setBufferTransform
- ASurfaceTransaction_setBufferTransparency
- ASurfaceTransaction_setBufferWithRelease
- ASurfaceTransaction_setColor
- ASurfaceTransaction_setCrop
- ASurfaceTransaction_setDamageRegion
- ASurfaceTransaction_setDesiredHdrHeadroom
- ASurfaceTransaction_setEnableBackPressure
- ASurfaceTransaction_setExtendedRangeBrightness
- ASurfaceTransaction_setFrameRate
- ASurfaceTransaction_setFrameRateWithChangeStrategy
- ASurfaceTransaction_setGeometry
- ASurfaceTransaction_setHdrMetadata_cta861_3
- ASurfaceTransaction_setHdrMetadata_smpte2086
- ASurfaceTransaction_setPosition
- ASurfaceTransaction_setScale
- ASurfaceTransaction_setVisibility
- ASurfaceTransaction_setZOrder
Related Procedures With Returns
ASurfaceTexture ¶
ASurfaceTexture :: struct {}
{@link ASurfaceTexture} is an opaque type to manage SurfaceTexture from native code. An {@link ASurfaceTexture} can be obtained from an android.graphics.SurfaceTexture object using ASurfaceTexture_fromSurfaceTexture().
Related Procedures With Parameters
Related Procedures With Returns
ASurfaceTransaction ¶
ASurfaceTransaction :: struct {}
* ASurfaceTransaction is a collection of updates to the surface tree that must be applied atomically.
Related Procedures With Parameters
- ASurfaceTransaction_apply
- ASurfaceTransaction_clearFrameRate
- ASurfaceTransaction_delete
- ASurfaceTransaction_reparent
- ASurfaceTransaction_setBuffer
- ASurfaceTransaction_setBufferAlpha
- ASurfaceTransaction_setBufferDataSpace
- ASurfaceTransaction_setBufferTransform
- ASurfaceTransaction_setBufferTransparency
- ASurfaceTransaction_setBufferWithRelease
- ASurfaceTransaction_setColor
- ASurfaceTransaction_setCrop
- ASurfaceTransaction_setDamageRegion
- ASurfaceTransaction_setDesiredHdrHeadroom
- ASurfaceTransaction_setDesiredPresentTime
- ASurfaceTransaction_setEnableBackPressure
- ASurfaceTransaction_setExtendedRangeBrightness
- ASurfaceTransaction_setFrameRate
- ASurfaceTransaction_setFrameRateWithChangeStrategy
- ASurfaceTransaction_setFrameTimeline
- ASurfaceTransaction_setGeometry
- ASurfaceTransaction_setHdrMetadata_cta861_3
- ASurfaceTransaction_setHdrMetadata_smpte2086
- ASurfaceTransaction_setOnCommit
- ASurfaceTransaction_setOnComplete
- ASurfaceTransaction_setPosition
- ASurfaceTransaction_setScale
- ASurfaceTransaction_setVisibility
- ASurfaceTransaction_setZOrder
Related Procedures With Returns
ASurfaceTransactionStats ¶
ASurfaceTransactionStats :: struct {}
* An opaque handle returned during a callback that can be used to query general stats and stats for surfaces which were either removed or for which buffers were updated after this transaction was applied.
ASurfaceTransaction_OnBufferRelease ¶
The ASurfaceTransaction_OnBufferRelease callback is invoked when a buffer that was passed in ASurfaceTransaction_setBuffer is ready to be reused.

This callback is guaranteed to be invoked if ASurfaceTransaction_setBuffer is called with a non-null buffer. If the buffer in the transaction is replaced via another call to ASurfaceTransaction_setBuffer, the callback will be invoked immediately. Otherwise the callback will be invoked before the ASurfaceTransaction_OnComplete callback, after the buffer was presented.

If this callback is set, the caller should not release the buffer using the ASurfaceTransaction_OnComplete.

\param context Optional context provided by the client that is passed into the callback.

\param release_fence_fd Returns the fence file descriptor used to signal the release of the buffer associated with this callback. If this fence is valid (>= 0), the buffer has not yet been released and the fence will signal when the buffer has been released. If the fence is -1, the buffer is already released. The recipient of the callback takes ownership of the fence fd and is responsible for closing it.

THREADING: The callback can be invoked on any thread.

Available since API level 36.
Related Procedures With Parameters
ASurfaceTransaction_OnCommit ¶
ASurfaceTransaction_OnCommit :: proc "c" (_context: rawptr, stats: ^ASurfaceTransactionStats)
The ASurfaceTransaction_OnCommit callback is invoked when the transaction is applied and the updates are ready to be presented. This callback will be invoked before the ASurfaceTransaction_OnComplete callback.
This callback does not mean buffers have been released! It simply means that any new transactions applied will not overwrite the transaction for which we are receiving a callback and instead will be included in the next frame. If you are trying to avoid dropping frames (overwriting transactions), and are unable to use timestamps (which provide a more efficient solution), then this method provides a way to pace your transaction application.
\param context Optional context provided by the client that is passed into the callback.
\param stats Opaque handle that can be passed to ASurfaceTransactionStats functions to query information about the transaction. The handle is only valid during the callback. Present and release fences are not available for this callback; querying them using ASurfaceTransactionStats_getPresentFenceFd and ASurfaceTransactionStats_getPreviousReleaseFenceFd will result in failure.
THREADING: The transaction committed callback can be invoked on any thread.
Available since API level 31.
Related Procedures With Parameters
ASurfaceTransaction_OnComplete ¶
ASurfaceTransaction_OnComplete :: proc "c" (_context: rawptr, stats: ^ASurfaceTransactionStats)
Since the transactions are applied asynchronously, the ASurfaceTransaction_OnComplete callback can be used to be notified when a frame including the updates in a transaction was presented. Buffers which are replaced or removed from the scene in the transaction invoking this callback may be reused after this point.
\param context Optional context provided by the client that is passed into the callback.
\param stats Opaque handle that can be passed to ASurfaceTransactionStats functions to query information about the transaction. The handle is only valid during the callback.
THREADING: The transaction completed callback can be invoked on any thread.
Available since API level 29.
Related Procedures With Parameters
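A hedged sketch of wiring up the completion callback; the (transaction, context, callback) argument order is assumed from the NDK:

	// Sketch: observe presentation of a transaction's updates.
	on_complete :: proc "c" (_context: rawptr, stats: ^ASurfaceTransactionStats) {
		// `stats` is only valid for the duration of the callback.
		latch_time := ASurfaceTransactionStats_getLatchTime(stats)
		_ = latch_time
	}

	// Before applying the transaction:
	// ASurfaceTransaction_setOnComplete(transaction, nil, on_complete)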
ASystemFontIterator ¶
ASystemFontIterator :: struct {}
ASystemFontIterator provides access to the system font configuration. ASystemFontIterator is an iterator for all available system font settings. This iterator is not a thread-safe object. Do not pass this iterator to other threads.
Related Procedures With Parameters
Related Procedures With Returns
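A minimal iteration sketch, assuming bindings for ASystemFontIterator_open/next/close and AFont_close that mirror the NDK; per the note above, keep the iterator on a single thread:

	// Sketch: walk all available system font settings.
	list_system_fonts :: proc() {
		it := ASystemFontIterator_open() // assumed binding
		if it == nil do return
		defer ASystemFontIterator_close(it)

		for font := ASystemFontIterator_next(it); font != nil; font = ASystemFontIterator_next(it) {
			// Inspect `font` here; each AFont is assumed to need AFont_close.
			AFont_close(font)
		}
	}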
AThermalHeadroomThreshold ¶
AThermalHeadroomThreshold :: struct { headroom: f32, thermalStatus: AThermalStatus, }
This struct defines an instance of headroom threshold value and its status. The value should be monotonically non-decreasing as the thermal status increases. For ATHERMAL_STATUS_SEVERE, its headroom threshold is guaranteed to be 1.0f. For statuses below severe, the value should be less than or equal to 1.0f, and for statuses above severe, the value should be greater than or equal to 1.0f. Also see AThermal_getThermalHeadroom for an explanation of headroom, and AThermal_getThermalHeadroomThresholds for how to use this.
AThermalManager ¶
AThermalManager :: struct {}
An opaque type representing a handle to a thermal manager. An instance of thermal manager must be acquired prior to using thermal status APIs and must be released after use. To use:
- Create a new thermal manager instance by calling the AThermal_acquireManager function.
- Get the current thermal status with AThermal_getCurrentThermalStatus.
- Register a thermal status listener with AThermal_registerThermalStatusListener.
- Unregister a thermal status listener with AThermal_unregisterThermalStatusListener.
- Release the thermal manager instance with AThermal_releaseManager.
Related Procedures With Parameters
Related Procedures With Returns
AThermalStatus ¶
AThermalStatus :: enum int {
	ERROR    = -1, // Error in thermal status.
	NONE     = 0,  // Not under throttling.
	LIGHT    = 1,  // Light throttling where UX is not impacted.
	MODERATE = 2,  // Moderate throttling where UX is not largely impacted.
	SEVERE   = 3,  // Severe throttling where UX is largely impacted.
	CRITICAL = 4,  // Platform has done everything to reduce power.
	// Key components in the platform are shutting down due to thermal condition.
	// Device functionalities will be limited.
	EMERGENCY = 5,
	SHUTDOWN  = 6, // Needs shutdown immediately.
}
Thermal status used in function AThermal_getCurrentThermalStatus and AThermal_StatusCallback.
Related Procedures With Returns
AThermal_StatusCallback ¶
AThermal_StatusCallback :: proc "c" (data: rawptr, status: AThermalStatus)
Prototype of the function that is called when the thermal status changes. It is passed the updated thermal status as a parameter, as well as the pointer provided by the client that registered the callback.
Related Procedures With Parameters
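A sketch that follows the acquire/query/listen/release steps from the AThermalManager description above; the (manager, callback, data) parameter order of the register/unregister calls is assumed from the NDK:

	// Sketch: watch thermal status and shed load when throttled.
	on_thermal_status :: proc "c" (data: rawptr, status: AThermalStatus) {
		if status >= .SEVERE {
			// Reduce frame rate, lower quality, etc.
		}
	}

	monitor_thermal :: proc() {
		mgr := AThermal_acquireManager()
		if mgr == nil do return
		defer AThermal_releaseManager(mgr)

		_ = AThermal_getCurrentThermalStatus(mgr)
		AThermal_registerThermalStatusListener(mgr, on_thermal_status, nil)
		// ... run ...
		AThermal_unregisterThermalStatusListener(mgr, on_thermal_status, nil)
	}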
AVsyncId ¶
AVsyncId :: i64
The identifier of a frame timeline.
Related Procedures With Parameters
- AAsset_openFileDescriptor
- AAsset_openFileDescriptor64
- AAsset_seek
- AAsset_seek64
- AChoreographer_postFrameCallbackDelayed
- APerformanceHint_createSession
- APerformanceHint_reportActualWorkDuration
- APerformanceHint_updateTargetWorkDuration
- ASensorEventQueue_registerSensor
- ASurfaceTransaction_setDesiredPresentTime
- ASurfaceTransaction_setFrameTimeline
- ATrace_setCounter
- AWorkDuration_setActualCpuDurationNanos
- AWorkDuration_setActualGpuDurationNanos
- AWorkDuration_setActualTotalDurationNanos
- AWorkDuration_setWorkPeriodStartTimestampNanos
Related Procedures With Returns
- AAsset_getLength
- AAsset_getLength64
- AAsset_getRemainingLength
- AAsset_getRemainingLength64
- AChoreographerFrameCallbackData_getFrameTimeNanos
- AChoreographerFrameCallbackData_getFrameTimelineDeadlineNanos
- AChoreographerFrameCallbackData_getFrameTimelineExpectedPresentationTimeNanos
- AChoreographerFrameCallbackData_getFrameTimelineVsyncId
- AImageDecoderFrameInfo_getDuration
- AKeyEvent_getDownTime
- AKeyEvent_getEventTime
- AMotionEvent_getDownTime
- AMotionEvent_getEventTime
- AMotionEvent_getHistoricalEventTime
- APerformanceHint_getPreferredUpdateRateNanos
- ASurfaceTexture_getTimestamp
- ASurfaceTransactionStats_getAcquireTime
- ASurfaceTransactionStats_getLatchTime
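AVsyncId ties the choreographer's frame timelines to surface transactions. A hedged sketch (the index parameter of the getter and the shape of the vsync callback are assumed from the NDK; passing the transaction through the user-data pointer is hypothetical):

	// Sketch: target the first frame timeline reported for this frame.
	on_vsync :: proc "c" (data: ^AChoreographerFrameCallbackData, user: rawptr) {
		vsync_id := AChoreographerFrameCallbackData_getFrameTimelineVsyncId(data, 0) // index assumed
		transaction := (^ASurfaceTransaction)(user) // hypothetical: transaction smuggled via user data
		ASurfaceTransaction_setFrameTimeline(transaction, vsync_id)
	}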
AWorkDuration ¶
AWorkDuration :: struct {}
AWorkDuration is an opaque type that represents the breakdown of the actual workload duration in each component internally. A new AWorkDuration can be obtained using AWorkDuration_create(); when the client finishes using it, AWorkDuration_release() must be called to destroy and free up the associated resources. The following procedures allow clients to set the measured work duration of each component on an AWorkDuration:
- AWorkDuration_setWorkPeriodStartTimestampNanos()
- AWorkDuration_setActualTotalDurationNanos()
- AWorkDuration_setActualCpuDurationNanos()
- AWorkDuration_setActualGpuDurationNanos()
Related Procedures With Parameters
Related Procedures With Returns
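A lifecycle sketch built from the setters listed above; how the filled-in AWorkDuration is handed to the performance hint API is left open, since only the i64-based report call appears in the lists in this document:

	// Sketch: assemble one frame's measured durations.
	report_frame :: proc(start_ns, cpu_ns, gpu_ns: i64) {
		wd := AWorkDuration_create()
		defer AWorkDuration_release(wd)

		AWorkDuration_setWorkPeriodStartTimestampNanos(wd, start_ns)
		AWorkDuration_setActualCpuDurationNanos(wd, cpu_ns)
		AWorkDuration_setActualGpuDurationNanos(wd, gpu_ns)
		AWorkDuration_setActualTotalDurationNanos(wd, cpu_ns + gpu_ns)
		// Hand `wd` to a reportActualWorkDuration variant that accepts an
		// AWorkDuration, if the binding provides one.
	}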
AndroidBitmapCompressFormat ¶
AndroidBitmapCompressFormat :: enum i32 {
	// Compress to the JPEG format. A quality of 0 means compress for the
	// smallest size; 100 means compress for max visual quality.
	JPEG = 0,
	// Compress to the PNG format. PNG is lossless, so quality is ignored.
	PNG = 1,
	// Compress to the WEBP lossy format. A quality of 0 means compress for the
	// smallest size; 100 means compress for max visual quality.
	WEBP_LOSSY = 3,
	// Compress to the WEBP lossless format. Quality refers to how much effort
	// to put into compression. A value of 0 means to compress quickly,
	// resulting in a relatively large file size; 100 means to spend more time
	// compressing, resulting in a smaller file.
	WEBP_LOSSLESS = 4,
}
Specifies the formats that can be compressed to with AndroidBitmap_compress.
Related Procedures With Parameters
AndroidBitmapFlags ¶
AndroidBitmapFlags :: enum u32 {
	// If this bit is set in BitmapInfo.flags, the Bitmap uses the HARDWARE
	// Config, and its AHardwareBuffer can be retrieved via
	// AndroidBitmap_getHardwareBuffer.
	IS_HARDWARE = 2147483648,
}
AndroidBitmapFlagsAlpha ¶
AndroidBitmapFlagsAlpha :: enum int {
	PREMUL   = 0, // Pixel components are premultiplied by alpha.
	OPAQUE   = 1, // Pixels are opaque.
	UNPREMUL = 2, // Pixel components are independent of alpha.
	MASK     = 3, // Bit mask for BitmapInfo.flags to isolate the alpha.
	SHIFT    = 0, // Shift for BitmapInfo.flags to isolate the alpha.
}
Bitmap alpha format.
Related Procedures With Returns
AndroidBitmapFormat ¶
AndroidBitmapFormat :: enum i32 {
	NONE      = 0, // No format.
	RGBA_8888 = 1, // Red: 8 bits, Green: 8 bits, Blue: 8 bits, Alpha: 8 bits.
	RGB_565   = 4, // Red: 5 bits, Green: 6 bits, Blue: 5 bits.
	// Deprecated in API level 13. Because of the poor quality of this
	// configuration, it is advised to use ARGB_8888 instead.
	RGBA_4444    = 7,
	A_8          = 8,  // Alpha: 8 bits.
	RGBA_F16     = 9,  // Each component is stored as a half float.
	RGBA_1010102 = 10, // Red: 10 bits, Green: 10 bits, Blue: 10 bits, Alpha: 2 bits.
}
Bitmap pixel format.
Related Procedures With Parameters
Related Procedures With Returns
AndroidBitmap_CompressWriteFunc ¶
AndroidBitmap_CompressWriteFunc :: proc "c" (userContext: rawptr, data: rawptr, size: uint) -> bool
User-defined function for writing the output of compression. Available since API level 30.
@param userContext Pointer to user-defined data passed to AndroidBitmap_compress.
@param data Compressed data of |size| bytes to write.
@param size Length in bytes of data to write.
@return Whether the operation succeeded.
Related Procedures With Parameters
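A sketch of a write callback that gathers the compressed output into a dynamic array; the exact parameter types in this binding (rawptr data, uint size) are assumptions:

	import "base:runtime"

	compress_write :: proc "c" (userContext: rawptr, data: rawptr, size: uint) -> bool {
		context = runtime.default_context() // "c" procs start without an Odin context
		buf := (^[dynamic]u8)(userContext)  // hypothetical sink passed via userContext
		append(buf, ..([^]u8)(data)[:size])
		return true
	}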
AppCmd ¶
AppCmd :: enum i32 {
	// Command from main thread: the AInputQueue has changed. Upon processing
	// this command, android_app->inputQueue will be updated to the new queue
	// (or NULL).
	INPUT_CHANGED,
	// Command from main thread: a new ANativeWindow is ready for use. Upon
	// receiving this command, android_app->window will contain the new window
	// surface.
	INIT_WINDOW,
	// Command from main thread: the existing ANativeWindow needs to be
	// terminated. Upon receiving this command, android_app->window still
	// contains the existing window; after calling android_app_exec_cmd
	// it will be set to NULL.
	TERM_WINDOW,
	// Command from main thread: the current ANativeWindow has been resized.
	// Please redraw with its new size.
	WINDOW_RESIZED,
	// Command from main thread: the system requires that the current
	// ANativeWindow be redrawn. You should redraw the window before handing
	// this to android_app_exec_cmd() in order to avoid transient drawing
	// glitches.
	WINDOW_REDRAW_NEEDED,
	// Command from main thread: the content area of the window has changed,
	// such as from the soft input window being shown or hidden. You can
	// find the new content rect in android_app::contentRect.
	CONTENT_RECT_CHANGED,
	// Command from main thread: the app's activity window has gained input focus.
	GAINED_FOCUS,
	// Command from main thread: the app's activity window has lost input focus.
	LOST_FOCUS,
	// Command from main thread: the current device configuration has changed.
	CONFIG_CHANGED,
	// Command from main thread: the system is running low on memory.
	// Try to reduce your memory use.
	LOW_MEMORY,
	// Command from main thread: the app's activity has been started.
	START,
	// Command from main thread: the app's activity has been resumed.
	RESUME,
	// Command from main thread: the app should generate a new saved state
	// for itself, to restore from later if needed. If you have saved state,
	// allocate it with malloc and place it in android_app.savedState with
	// the size in android_app.savedStateSize. It will be freed for you later.
	SAVE_STATE,
	// Command from main thread: the app's activity has been paused.
	PAUSE,
	// Command from main thread: the app's activity has been stopped.
	STOP,
	// Command from main thread: the app's activity is being destroyed,
	// and waiting for the app thread to clean up and exit before proceeding.
	DESTROY,
}
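AppCmd values are delivered by an android_native_app_glue-style main loop, which lives outside this package; a hedged handler sketch:

	// Sketch: react to lifecycle commands from the glue's dispatch.
	handle_cmd :: proc(cmd: AppCmd) {
		#partial switch cmd {
		case .INIT_WINDOW:
			// The window is now valid: create the rendering surface here.
		case .TERM_WINDOW:
			// The window is going away: destroy the rendering surface.
		case .LOW_MEMORY:
			// Drop caches and transient allocations.
		}
	}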
AssetOpenMode ¶
AssetOpenMode :: enum i32 {
	UNKNOWN   = 0, // No specific information about how data will be accessed.
	RANDOM    = 1, // Read chunks, and seek forward and backward.
	STREAMING = 2, // Read sequentially, with an occasional forward seek.
	BUFFER    = 3, // Caller plans to ask for a read-only buffer with all data.
}
Available access modes for opening assets with AAssetManager_open.
Related Procedures With Parameters
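A short sketch of picking an access mode when opening an asset; AAsset_close is assumed to be bound alongside AAsset_getLength:

	// Sketch: open an asset for sequential reading.
	probe_asset :: proc(mgr: ^AAssetManager) {
		asset := AAssetManager_open(mgr, "config.json", .STREAMING)
		if asset == nil do return
		defer AAsset_close(asset) // assumed binding

		length := AAsset_getLength(asset)
		_ = length // e.g. size a buffer before reading
	}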
BitmapInfo ¶
BitmapInfo :: struct {
	width:  u32, // The bitmap width in pixels.
	height: u32, // The bitmap height in pixels.
	stride: u32, // The number of bytes per row.
	format: AndroidBitmapFormat, // The bitmap pixel format. See AndroidBitmapFormat.
	// Bitfield containing information about the bitmap.
	//
	// Two bits are used to encode alpha. Use AndroidBitmapFlagsAlpha.MASK and
	// AndroidBitmapFlagsAlpha.SHIFT to retrieve them.
	//
	// One bit is used to encode whether the Bitmap uses the HARDWARE Config.
	// Use AndroidBitmapFlags.IS_HARDWARE to know.
	//
	// These flags were introduced in API level 30.
	flags: u32,
}
Bitmap info, see AndroidBitmap_getInfo().
Related Procedures With Parameters
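A sketch of decoding `flags` with the MASK/SHIFT values documented above; the result of AndroidBitmap_getInfo is discarded here since only its parameters appear in this document:

	// Sketch: query a Java Bitmap and pull the alpha bits out of `flags`.
	describe_bitmap :: proc(env: ^JNIEnv, jbitmap: jobject) {
		info: BitmapInfo
		_ = AndroidBitmap_getInfo(env, jbitmap, &info) // check the package's result code in real use

		alpha := AndroidBitmapFlagsAlpha((info.flags >> u32(AndroidBitmapFlagsAlpha.SHIFT)) & u32(AndroidBitmapFlagsAlpha.MASK))
		is_hardware := info.flags & u32(AndroidBitmapFlags.IS_HARDWARE) != 0
		_, _ = alpha, is_hardware
	}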
DLextFlagsBits ¶
DLextFlagsBits :: enum u64 {
	// When set, the `reserved_addr` and `reserved_size` fields must point to an
	// already-reserved region of address space which will be used to load the
	// library if it fits.
	//
	// If the reserved region is not large enough, loading will fail.
	RESERVED_ADDRESS = 0,
	// Like `ANDROID_DLEXT_RESERVED_ADDRESS`, but if the reserved region is not
	// large enough, the linker will choose an available address instead.
	RESERVED_ADDRESS_HINT = 1,
	// When set, write the GNU RELRO section of the mapped library to `relro_fd`
	// after relocation has been performed, to allow it to be reused by another
	// process loading the same library at the same address. This implies
	// `ANDROID_DLEXT_USE_RELRO`.
	//
	// This is mainly useful for the system WebView implementation.
	WRITE_RELRO = 2,
	// When set, compare the GNU RELRO section of the mapped library to `relro_fd`
	// after relocation has been performed, and replace any relocated pages that
	// are identical with a version mapped from the file.
	//
	// This is mainly useful for the system WebView implementation.
	USE_RELRO = 3,
	// Use `library_fd` instead of opening the file by name.
	// The filename parameter is still used to identify the library.
	USE_LIBRARY_FD = 4,
	// If opening a library using `library_fd`, read it starting at
	// `library_fd_offset`. This is mainly useful for loading a library stored
	// within another file (such as uncompressed inside a ZIP archive).
	// This flag is only valid when `ANDROID_DLEXT_USE_LIBRARY_FD` is set.
	USE_LIBRARY_FD_OFFSET = 5,
	// When set, do not use `stat(2)` to check if the library has already been loaded.
	//
	// This flag allows forced loading of the library in the case when for some
	// reason multiple ELF files share the same filename (because the already-loaded
	// library has been removed and overwritten, for example).
	//
	// Note that if the library has the same `DT_SONAME` as an old one and some other
	// library has the soname in its `DT_NEEDED` list, the first one will be used to
	// resolve any dependencies.
	FORCE_LOAD = 6,
	// This flag is used to load the library in a different namespace. The
	// namespace is specified in `library_namespace`.
	//
	// This flag is for internal use only (since there is no NDK API for namespaces).
	USE_NAMESPACE = 9,
	// Instructs dlopen to apply `ANDROID_DLEXT_RESERVED_ADDRESS`,
	// `ANDROID_DLEXT_RESERVED_ADDRESS_HINT`, `ANDROID_DLEXT_WRITE_RELRO` and
	// `ANDROID_DLEXT_USE_RELRO` to any libraries loaded as dependencies of the
	// main library as well.
	//
	// This means that if the main library depends on one or more not-already-loaded
	// libraries, they will be loaded consecutively into the region starting at
	// `reserved_addr`, and `reserved_size` must be large enough to contain all of
	// the libraries. The libraries will be loaded in the deterministic order
	// constructed from the DT_NEEDED entries, rather than the more secure random
	// order used by default.
	//
	// Each library's GNU RELRO sections will be written out to `relro_fd` in the
	// same order they were loaded. This will mean that the resulting file is
	// dependent on which of the libraries were already loaded, as only the newly
	// loaded libraries will be included, not any already-loaded dependencies.
	// The caller should ensure that the set of libraries newly loaded is
	// consistent for this to be effective.
	//
	// This is mainly useful for the system WebView implementation.
	RESERVED_ADDRESS_RECURSIVE = 10,
}
Bitfield definitions for android_dlextinfo::flags.
DeviceTypeCode ¶
DeviceTypeCode :: enum i32 {
	UNKNOWN = 0, // The device type cannot be provided.
	OTHER   = 1, // The device does not fall into any category below.
	CPU     = 2, // The device runs NNAPI models on single or multi-core CPU.
	// The device can run NNAPI models and also accelerate graphics APIs such
	// as OpenGL ES and Vulkan.
	GPU         = 3,
	ACCELERATOR = 4, // Dedicated accelerator for Machine Learning workloads.
}
Device types: the type of NNAPI device.
Related Procedures With Parameters
DurationCode ¶
DurationCode :: enum i32 {
	// Execution time on hardware (not driver, which runs on host processor).
	DURATION_ON_HARDWARE = 0,
	// Execution time in driver (including time on hardware). Excludes overhead
	// such as that of the runtime itself and the IPC needed for the runtime to
	// communicate with the driver.
	DURATION_IN_DRIVER = 1,
	// Execution time on hardware, after all dependencies have been signaled.
	// If no dependencies are specified (for example, if the execution was
	// scheduled other than with ANeuralNetworksExecution_startComputeWithDependencies),
	// the reported time will be the same as ANEURALNETWORKS_DURATION_ON_HARDWARE.
	// Available since NNAPI feature level 4.
	FENCED_DURATION_ON_HARDWARE = 2,
	// Execution time in driver, after all dependencies have been signaled.
	// Excludes overhead such as that of the runtime itself and the IPC needed
	// for the runtime to communicate with the driver.
	// If no dependencies are specified (for example, if the execution was
	// scheduled other than with ANeuralNetworksExecution_startComputeWithDependencies),
	// the reported time will be the same as ANEURALNETWORKS_DURATION_IN_DRIVER.
	// Available since NNAPI feature level 4.
	FENCED_DURATION_IN_DRIVER = 3,
}
Different duration measurements. Durations are measured in nanoseconds. Available since NNAPI feature level 3.
Related Procedures With Parameters
FamilyVariant ¶
FamilyVariant :: enum u32 {
	DEFAULT = 0, // A family variant value for the system default variant.
	// A family variant value for the compact font family variant.
	// The compact font family has Latin-based vertical metrics.
	COMPACT = 1,
	// A family variant value for the elegant font family variant.
	// The elegant font family may have larger vertical metrics than Latin font.
	ELEGANT = 2,
}
Related Procedures With Parameters
FeatureLevelCode ¶
FeatureLevelCode :: enum i64 {
	LEVEL_1 = 27, // NNAPI specification available in Android O-MR1, Android NNAPI feature level 1.
	LEVEL_2 = 28, // NNAPI specification available in Android P, Android NNAPI feature level 2.
	LEVEL_3 = 29, // NNAPI specification available in Android Q, Android NNAPI feature level 3.
	LEVEL_4 = 30, // NNAPI specification available in Android R, Android NNAPI feature level 4.
	// NNAPI specification available in Android S, Android NNAPI feature level 5.
	// After Android S, the NNAPI specification can be updated between Android
	// API releases.
	LEVEL_5 = 31,
	LEVEL_6 = 1000006, // Android NNAPI feature level 6.
	LEVEL_7 = 1000007, // Android NNAPI feature level 7.
	LEVEL_8 = 1000008, // Android NNAPI feature level 8.
}
NNAPI feature levels. Each update of the NNAPI specification yields a new NNAPI feature level enum value. An NNAPI feature level corresponds to an NNAPI specification version that a driver and/or the NNAPI runtime can implement. A feature level up to and including "FEATURE_LEVEL_5" maps directly to the Android API level that introduced the corresponding update of the NNAPI specification. Feature levels after Android API level 31 have no association with API level because the NNAPI specification can be updated between Android API releases. Outputs of ANeuralNetworksDevice_getFeatureLevel and ANeuralNetworks_getRuntimeFeatureLevel must be compared against these enum values instead of the Android API level.
Related Procedures With Parameters
Related Procedures With Returns
FontWeight ¶
FontWeight :: enum u16 {
	MIN         = 0,    // The minimum value for the font weight value.
	THIN        = 100,  // A font weight value for the thin weight.
	EXTRA_LIGHT = 200,  // A font weight value for the extra-light weight.
	LIGHT       = 300,  // A font weight value for the light weight.
	NORMAL      = 400,  // A font weight value for the normal weight.
	MEDIUM      = 500,  // A font weight value for the medium weight.
	SEMI_BOLD   = 600,  // A font weight value for the semi-bold weight.
	BOLD        = 700,  // A font weight value for the bold weight.
	EXTRA_BOLD  = 800,  // A font weight value for the extra-bold weight.
	BLACK       = 900,  // A font weight value for the black weight.
	MAX         = 1000, // The maximum value for the font weight value.
}
Related Procedures With Parameters
Related Procedures With Returns
FuseCode ¶
FuseCode :: enum int {
	NONE  = 0, // No fused activation function.
	RELU  = 1, // Fused ReLU activation function.
	RELU1 = 2, // Fused ReLU1 activation function.
	RELU6 = 3, // Fused ReLU6 activation function.
}
Fused activation function types. Available since NNAPI feature level 1.
HideSoftInputFlags ¶
HideSoftInputFlags :: enum int {
	// The soft input window should only be hidden if it was not explicitly
	// shown by the user.
	IMPLICIT_ONLY = 1,
	// The soft input window should normally be hidden, unless it was
	// originally shown with ANATIVEACTIVITY_SHOW_SOFT_INPUT_FORCED.
	NOT_ALWAYS = 2,
}
Flags for ANativeActivity_hideSoftInput; see the Java InputMethodManager API for documentation.
Related Procedures With Parameters
InputEventType ¶
InputEventType :: enum i32 {
	KEY        = 1, // Indicates that the input event is a key event.
	MOTION     = 2, // Indicates that the input event is a motion event.
	FOCUS      = 3, // Focus event.
	CAPTURE    = 4, // Capture event.
	DRAG       = 5, // Drag event.
	TOUCH_MODE = 6, // TouchMode event.
}
Input event types.
Related Procedures With Returns
InputSourceClass ¶
InputSourceClass :: bit_set[InputSourceClassBits; i32]
InputSourceClassBits ¶
InputSourceClassBits :: enum i32 { BUTTON = 0, POINTER = 1, NAVIGATION = 2, POSITION = 3, JOYSTICK = 4, }
Input source masks. Refer to the documentation on android.view.InputDevice for more details about input sources and their correct interpretation.
InputSourceDeviceBits ¶
InputSourceDeviceBits :: enum i32 {
	KEYBOARD    = 0, // BUTTON
	DPAD        = 1, // BUTTON
	GAMEPAD     = 2, // BUTTON
	TOUCHSCREEN = 4, // POINTER
	MOUSE       = 5, // POINTER
	STYLUS      = 6, // POINTER
	// This activates both bit 6 and 7, but doing 7 only is fine too.
	BLUETOOTH_STYLUS = 7,  // POINTER
	TRACKBALL        = 8,  // NAVIGATION
	MOUSE_RELATIVE   = 9,  // NAVIGATION
	TOUCHPAD         = 12, // POSITION
	TOUCH_NAVIGATION = 13, // NONE
	ROTARY_ENCODER   = 14, // NONE
	JOYSTICK         = 16, // JOYSTICK
	HDMI             = 17, // BUTTON
	SENSOR           = 18, // NONE
}
Input sources.
JNIEnv ¶
JNIEnv :: ^JNINativeInterface
Related Procedures With Parameters
- AAssetManager_fromJava
- AFileDescriptor_create
- AFileDescriptor_getFd
- AFileDescriptor_setFd
- AHardwareBuffer_fromHardwareBuffer
- AHardwareBuffer_toHardwareBuffer
- AInputQueue_fromJava
- AKeyEvent_fromJava
- AMotionEvent_fromJava
- ANativeWindow_fromSurface
- ANativeWindow_toSurface
- ASharedMemory_dupFromJava
- ASurfaceTexture_fromSurfaceTexture
- AndroidBitmap_getDataSpace
- AndroidBitmap_getHardwareBuffer
- AndroidBitmap_getInfo
- AndroidBitmap_lockPixels
- AndroidBitmap_unlockPixels
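Since JNIEnv is itself a pointer to the function table, every call goes through one explicit dereference; a minimal sketch:

	// Sketch: call through the JNI function table from native code.
	make_java_string :: proc(env: ^JNIEnv, text: cstring) -> jstring {
		// env^ is the table pointer; each entry takes env itself as its first argument.
		return env^.NewStringUTF(env, text)
	}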
JNIInvokeInterface ¶
JNIInvokeInterface :: struct {
	reserved0: rawptr,
	reserved1: rawptr,
	reserved2: rawptr,
	// NOTE: same here with JNIEnv. The two occurrences of ^^^JNINativeInterface
	// are originally ^^JNIEnv, but that seems to trigger some sort of race bug
	// in the compiler where it thinks ^^JNIEnv is an invalid type and throws an
	// 'Invalid type usage "^^JNIEnv"'.
	DestroyJavaVM:               proc "c" (vm: ^^JNIInvokeInterface) -> i32,
	AttachCurrentThread:         proc "c" (vm: ^^JNIInvokeInterface, p_env: ^^^JNINativeInterface, thr_args: rawptr) -> i32,
	DetachCurrentThread:         proc "c" (vm: ^^JNIInvokeInterface) -> i32,
	GetEnv:                      proc "c" (vm: ^^JNIInvokeInterface, env: ^rawptr, version: i32) -> i32,
	AttachCurrentThreadAsDaemon: proc "c" (vm: ^^JNIInvokeInterface, p_env: ^^^JNINativeInterface, thr_args: ^rawptr) -> i32,
}
JNI invocation interface.
JNINativeInterface ¶
JNINativeInterface :: struct {
	reserved0: rawptr,
	reserved1: rawptr,
	reserved2: rawptr,
	reserved3: rawptr,
	GetVersion: proc "c" (jni: ^^JNINativeInterface) -> i32,
	DefineClass: proc "c" (jni: ^^JNINativeInterface, name: cstring, loader: jobject, #by_ptr buf: i8, bufLen: i32) -> jclass,
	FindClass: proc "c" (jni: ^^JNINativeInterface, name: cstring) -> jclass,
	FromReflectedMethod: proc "c" (jni: ^^JNINativeInterface, method: jobject) -> rawptr,
	FromReflectedField: proc "c" (jni: ^^JNINativeInterface, field: jobject) -> rawptr,
	// spec doesn't show jboolean parameter
	ToReflectedMethod: proc "c" (jni: ^^JNINativeInterface, cls: jclass, methodID: rawptr, isStatic: u8) -> jobject,
	GetSuperclass: proc "c" (jni: ^^JNINativeInterface, clazz: jclass) -> jclass,
	IsAssignableFrom: proc "c" (jni: ^^JNINativeInterface, clazz1: jclass, clazz2: jclass) -> u8,
	// spec doesn't show jboolean parameter
	ToReflectedField: proc "c" (jni: ^^JNINativeInterface, cls: jclass, fieldID: rawptr, isStatic: u8) -> jobject,
	Throw: proc "c" (jni: ^^JNINativeInterface, obj: jthrowable) -> i32,
	ThrowNew: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, message: cstring) -> i32,
	ExceptionOccurred: proc "c" (jni: ^^JNINativeInterface),
	ExceptionDescribe: proc "c" (jni: ^^JNINativeInterface),
	ExceptionClear: proc "c" (jni: ^^JNINativeInterface),
	FatalError: proc "c" (jni: ^^JNINativeInterface, msg: cstring),
	PushLocalFrame: proc "c" (jni: ^^JNINativeInterface, capacity: i32) -> i32,
	PopLocalFrame: proc "c" (jni: ^^JNINativeInterface, result: jobject) -> jobject,
	NewGlobalRef: proc "c" (jni: ^^JNINativeInterface, obj: jobject) -> jobject,
	DeleteGlobalRef: proc "c" (jni: ^^JNINativeInterface, globalRef: jobject),
	DeleteLocalRef: proc "c" (jni: ^^JNINativeInterface, localRef: jobject),
	IsSameObject: proc "c" (jni: ^^JNINativeInterface, ref1: jobject, ref2: jobject) -> u8,
	NewLocalRef: proc "c" (jni: ^^JNINativeInterface, ref: jobject) -> jobject,
	EnsureLocalCapacity: proc "c" (jni: ^^JNINativeInterface, capacity: i32) -> i32,
	AllocObject: proc "c" (jni: ^^JNINativeInterface, clazz: jclass) -> jobject,
	NewObject: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> jobject,
	NewObjectV: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> jobject,
	NewObjectA: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> jobject,
	GetObjectClass: proc "c" (jni: ^^JNINativeInterface, obj: jobject) -> jclass,
	IsInstanceOf: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass) -> u8,
	GetMethodID: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, name: cstring, sig: cstring) -> rawptr,
	// TODO: should va_list args be pointers?
	CallObjectMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #c_vararg args: ..any) -> jobject,
	CallObjectMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, args: ^c.va_list) -> jobject,
	CallObjectMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #by_ptr args: jvalue) -> jobject,
	CallBooleanMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #c_vararg args: ..any) -> u8,
	CallBooleanMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, args: ^c.va_list) -> u8,
	CallBooleanMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #by_ptr args: jvalue) -> u8,
	CallByteMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #c_vararg args: ..any) -> i8,
	CallByteMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, args: ^c.va_list) -> i8,
	CallByteMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #by_ptr args: jvalue) -> i8,
	CallCharMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #c_vararg args: ..any) -> u16,
	CallCharMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, args: ^c.va_list) -> u16,
	CallCharMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #by_ptr args: jvalue) -> u16,
	CallShortMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #c_vararg args: ..any) -> i16,
	CallShortMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, args: ^c.va_list) -> i16,
	CallShortMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #by_ptr args: jvalue) -> i16,
	CallIntMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #c_vararg args: ..any) -> i32,
	CallIntMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, args: ^c.va_list) -> i32,
	CallIntMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #by_ptr args: jvalue) -> i32,
	CallLongMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #c_vararg args: ..any) -> i64,
	CallLongMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, args: ^c.va_list) -> i64,
	CallLongMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #by_ptr args: jvalue) -> i64,
	CallFloatMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #c_vararg args: ..any) -> f32,
	CallFloatMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, args: ^c.va_list) -> f32,
	CallFloatMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #by_ptr args: jvalue) -> f32,
	CallDoubleMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #c_vararg args: ..any) -> f64,
	CallDoubleMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, args: ^c.va_list) -> f64,
	CallDoubleMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #by_ptr args: jvalue) -> f64,
	CallVoidMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #c_vararg args: ..any),
	CallVoidMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, args: ^c.va_list),
	CallVoidMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, methodID: rawptr, #by_ptr args: jvalue),
	CallNonvirtualObjectMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> jobject,
	CallNonvirtualObjectMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> jobject,
	CallNonvirtualObjectMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> jobject,
	CallNonvirtualBooleanMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> u8,
	CallNonvirtualBooleanMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> u8,
	CallNonvirtualBooleanMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> u8,
	CallNonvirtualByteMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> i8,
	CallNonvirtualByteMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> i8,
	CallNonvirtualByteMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> i8,
	CallNonvirtualCharMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> u16,
	CallNonvirtualCharMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> u16,
	CallNonvirtualCharMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> u16,
	CallNonvirtualShortMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> i16,
	CallNonvirtualShortMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> i16,
	CallNonvirtualShortMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> i16,
	CallNonvirtualIntMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> i32,
	CallNonvirtualIntMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> i32,
	CallNonvirtualIntMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> i32,
	CallNonvirtualLongMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> i64,
	CallNonvirtualLongMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> i64,
	CallNonvirtualLongMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> i64,
	CallNonvirtualFloatMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> f32,
	CallNonvirtualFloatMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> f32,
	CallNonvirtualFloatMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> f32,
	CallNonvirtualDoubleMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> f64,
	CallNonvirtualDoubleMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> f64,
	CallNonvirtualDoubleMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> f64,
	CallNonvirtualVoidMethod: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #c_vararg args: ..any),
	CallNonvirtualVoidMethodV: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, args: ^c.va_list),
	CallNonvirtualVoidMethodA: proc "c" (jni: ^^JNINativeInterface, obj: jobject, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue),
	GetFieldID: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, name: cstring, sig: cstring) -> rawptr,
	GetObjectField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr) -> jobject,
	GetBooleanField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr) -> u8,
	GetByteField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr) -> i8,
	GetCharField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr) -> u16,
	GetShortField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr) -> i16,
	GetIntField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr) -> i32,
	GetLongField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr) -> i64,
	GetFloatField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr) -> f32,
	GetDoubleField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr) -> f64,
	SetObjectField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr, value: jobject),
	SetBooleanField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr, value: u8),
	SetByteField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr, value: i8),
	SetCharField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr, value: u16),
	SetShortField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr, value: i16),
	SetIntField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr, value: i32),
	SetLongField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr, value: i64),
	SetFloatField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr, value: f32),
	SetDoubleField: proc "c" (jni: ^^JNINativeInterface, obj: jobject, fieldID: rawptr, value: f64),
	GetStaticMethodID: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, name: cstring, sig: cstring) -> rawptr,
	CallStaticObjectMethod: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> jobject,
	CallStaticObjectMethodV: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> jobject,
	CallStaticObjectMethodA: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> jobject,
	CallStaticBooleanMethod: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> u8,
	CallStaticBooleanMethodV: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> u8,
	CallStaticBooleanMethodA: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> u8,
	CallStaticByteMethod: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> i8,
	CallStaticByteMethodV: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> i8,
	CallStaticByteMethodA: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> i8,
	CallStaticCharMethod: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> u16,
	CallStaticCharMethodV: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> u16,
	CallStaticCharMethodA: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> u16,
	CallStaticShortMethod: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> i16,
	CallStaticShortMethodV: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> i16,
	CallStaticShortMethodA: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> i16,
	CallStaticIntMethod: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> i32,
	CallStaticIntMethodV: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> i32,
	CallStaticIntMethodA: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> i32,
	CallStaticLongMethod: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> i64,
	CallStaticLongMethodV: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> i64,
	CallStaticLongMethodA: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> i64,
	CallStaticFloatMethod: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> f32,
	CallStaticFloatMethodV: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> f32,
	CallStaticFloatMethodA: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> f32,
	CallStaticDoubleMethod: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #c_vararg args: ..any) -> f64,
	CallStaticDoubleMethodV: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, args: ^c.va_list) -> f64,
	CallStaticDoubleMethodA: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue) -> f64,
	CallStaticVoidMethod: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #c_vararg args: ..any),
	CallStaticVoidMethodV: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, args: ^c.va_list),
	CallStaticVoidMethodA: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methodID: rawptr, #by_ptr args: jvalue),
	GetStaticFieldID: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, name: cstring, sig: cstring) -> rawptr,
	GetStaticObjectField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr) -> jobject,
	GetStaticBooleanField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr) -> u8,
	GetStaticByteField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr) -> i8,
	GetStaticCharField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr) -> u16,
	GetStaticShortField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr) -> i16,
	GetStaticIntField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr) -> i32,
	GetStaticLongField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr) -> i64,
	GetStaticFloatField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr) -> f32,
	GetStaticDoubleField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr) -> f64,
	SetStaticObjectField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr, value: jobject),
	SetStaticBooleanField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr, value: u8),
	SetStaticByteField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr, value: i8),
	SetStaticCharField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr, value: u16),
	SetStaticShortField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr, value: i16),
	SetStaticIntField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr, value: i32),
	SetStaticLongField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr, value: i64),
	SetStaticFloatField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr, value: f32),
	SetStaticDoubleField: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, fieldID: rawptr, value: f64),
	NewString: proc "c" (jni: ^^JNINativeInterface, unicodeChars: [^]u16, len: i32) -> jstring,
	GetStringLength: proc "c" (jni: ^^JNINativeInterface, str: jstring) -> i32,
	GetStringChars: proc "c" (jni: ^^JNINativeInterface, str: jstring, isCopy: ^u8) -> [^]u16,
	ReleaseStringChars: proc "c" (jni: ^^JNINativeInterface, str: jstring, chars: [^]u16),
	NewStringUTF: proc "c" (jni: ^^JNINativeInterface, bytes: cstring) -> jstring,
	GetStringUTFLength: proc "c" (jni: ^^JNINativeInterface, str: jstring) -> i32,
	// JNI spec says this returns const jbyte*, but that's inconsistent
	GetStringUTFChars: proc "c" (jni: ^^JNINativeInterface, str: jstring, isCopy: ^u8) -> cstring,
	ReleaseStringUTFChars: proc "c" (jni: ^^JNINativeInterface, str: jstring, utf: cstring),
	GetArrayLength: proc "c" (jni: ^^JNINativeInterface, array: jarray) -> i32,
	NewObjectArray: proc "c" (jni: ^^JNINativeInterface, length: i32, elementClass: jclass, initialElement: jobject) -> jobjectArray,
	GetObjectArrayElement: proc "c" (jni: ^^JNINativeInterface, array: jobjectArray, index: i32) -> jobject,
	SetObjectArrayElement: proc "c" (jni: ^^JNINativeInterface, array: jobjectArray, index: i32, value: jobject),
	NewBooleanArray: proc "c" (jni: ^^JNINativeInterface, length: i32) -> jbooleanArray,
	NewByteArray: proc "c" (jni: ^^JNINativeInterface, length: i32) -> jbyteArray,
	NewCharArray: proc "c" (jni: ^^JNINativeInterface, length: i32) -> jcharArray,
	NewShortArray: proc "c" (jni: ^^JNINativeInterface, length: i32) -> jshortArray,
	NewIntArray: proc "c" (jni: ^^JNINativeInterface, length: i32) -> jintArray,
	NewLongArray: proc "c" (jni: ^^JNINativeInterface, length: i32) -> jlongArray,
	NewFloatArray: proc "c" (jni: ^^JNINativeInterface, length: i32) -> jfloatArray,
	NewDoubleArray: proc "c" (jni: ^^JNINativeInterface, length: i32) -> jdoubleArray,
	GetBooleanArrayElements: proc "c" (jni: ^^JNINativeInterface, array: jbooleanArray, isCopy: ^u8) -> [^]u8,
	GetByteArrayElements: proc "c" (jni: ^^JNINativeInterface, array: jbyteArray, isCopy: ^u8) -> [^]i8,
	GetCharArrayElements: proc "c" (jni: ^^JNINativeInterface, array: jcharArray, isCopy: ^u8) -> [^]u16,
	GetShortArrayElements: proc "c" (jni: ^^JNINativeInterface, array: jshortArray, isCopy: ^u8) -> [^]i16,
	GetIntArrayElements: proc "c" (jni: ^^JNINativeInterface, array: jintArray, isCopy: ^u8) -> [^]i32,
	GetLongArrayElements: proc "c" (jni: ^^JNINativeInterface, array: jlongArray, isCopy: ^u8) -> [^]i64,
	GetFloatArrayElements: proc "c" (jni: ^^JNINativeInterface, array: jfloatArray, isCopy: ^u8) -> [^]f32,
	GetDoubleArrayElements: proc "c" (jni: ^^JNINativeInterface, array: jdoubleArray, isCopy: ^u8) -> [^]f64,
	ReleaseBooleanArrayElements: proc "c" (jni: ^^JNINativeInterface, array: jbooleanArray, elems: [^]u8, mode: i32),
	ReleaseByteArrayElements: proc "c" (jni: ^^JNINativeInterface, array: jbyteArray, elems: [^]i8, mode: i32),
	ReleaseCharArrayElements: proc "c" (jni: ^^JNINativeInterface, array: jcharArray, elems: [^]u16, mode: i32),
	ReleaseShortArrayElements: proc "c" (jni: ^^JNINativeInterface, array: jshortArray, elems: [^]i16, mode: i32),
	ReleaseIntArrayElements: proc "c" (jni: ^^JNINativeInterface, array: jintArray, elems: [^]i32, mode: i32),
	ReleaseLongArrayElements: proc "c" (jni: ^^JNINativeInterface, array: jlongArray, elems: [^]i64, mode: i32),
	ReleaseFloatArrayElements: proc "c" (jni: ^^JNINativeInterface, array: jfloatArray, elems: [^]f32, mode: i32),
	ReleaseDoubleArrayElements: proc "c" (jni: ^^JNINativeInterface, array: jdoubleArray, elems: [^]f64, mode: i32),
	GetBooleanArrayRegion: proc "c" (jni: ^^JNINativeInterface, array: jbooleanArray, start: i32, len: i32, buf: [^]u8),
	GetByteArrayRegion: proc "c" (jni: ^^JNINativeInterface, array: jbyteArray, start: i32, len: i32, buf: [^]i8),
	GetCharArrayRegion: proc "c" (jni: ^^JNINativeInterface, array: jcharArray, start: i32, len: i32, buf: [^]u16),
	GetShortArrayRegion: proc "c" (jni: ^^JNINativeInterface, array: jshortArray, start: i32, len: i32, buf: [^]i16),
	GetIntArrayRegion: proc "c" (jni: ^^JNINativeInterface, array: jintArray, start: i32, len: i32, buf: [^]i32),
	GetLongArrayRegion: proc "c" (jni: ^^JNINativeInterface, array: jlongArray, start: i32, len: i32, buf: [^]i64),
	GetFloatArrayRegion: proc "c" (jni: ^^JNINativeInterface, array: jfloatArray, start: i32, len: i32, buf: [^]f32),
	GetDoubleArrayRegion: proc "c" (jni: ^^JNINativeInterface, array: jdoubleArray, start: i32, len: i32, buf: [^]f64),
	// spec shows these without const, some jni.h do, some don't
	SetBooleanArrayRegion: proc "c" (jni: ^^JNINativeInterface, array: jbooleanArray, start: i32, len: i32, buf: [^]u8),
	SetByteArrayRegion: proc "c" (jni: ^^JNINativeInterface, array: jbyteArray, start: i32, len: i32, buf: [^]i8),
	SetCharArrayRegion: proc "c" (jni: ^^JNINativeInterface, array: jcharArray, start: i32, len: i32, buf: [^]u16),
	SetShortArrayRegion: proc "c" (jni: ^^JNINativeInterface, array: jshortArray, start: i32, len: i32, buf: [^]i16),
	SetIntArrayRegion: proc "c" (jni: ^^JNINativeInterface, array: jintArray, start: i32, len: i32, buf: [^]i32),
	SetLongArrayRegion: proc "c" (jni: ^^JNINativeInterface, array: jlongArray, start: i32, len: i32, buf: [^]i64),
	SetFloatArrayRegion: proc "c" (jni: ^^JNINativeInterface, array: jfloatArray, start: i32, len: i32, buf: [^]f32),
	SetDoubleArrayRegion: proc "c" (jni: ^^JNINativeInterface, array: jdoubleArray, start: i32, len: i32, buf: [^]f64),
	RegisterNatives: proc "c" (jni: ^^JNINativeInterface, clazz: jclass, methods: [^]JNINativeMethod, nMethods: i32) -> i32,
	UnregisterNatives: proc "c" (jni: ^^JNINativeInterface, clazz: jclass) -> i32,
	MonitorEnter: proc "c" (jni: ^^JNINativeInterface, obj: jobject) -> i32,
	MonitorExit: proc "c" (jni: ^^JNINativeInterface, obj: jobject) -> i32,
	GetJavaVM: proc "c" (jni: ^^JNINativeInterface, vm: ^^^JNIInvokeInterface) -> i32,
	GetStringRegion: proc "c" (jni: ^^JNINativeInterface, str: jstring, start: i32, len: i32, buf: [^]u16),
	GetStringUTFRegion: proc "c" (jni: ^^JNINativeInterface, str: jstring, start: i32, len: i32, buf: [^]u8),
	GetPrimitiveArrayCritical: proc "c" (jni: ^^JNINativeInterface, array: jarray, isCopy: ^u8) -> rawptr,
	ReleasePrimitiveArrayCritical: proc "c" (jni: ^^JNINativeInterface, array: jarray, carray: rawptr, mode: i32),
	GetStringCritical: proc "c" (jni: ^^JNINativeInterface, str: jstring, isCopy: ^u8) -> [^]u16,
	ReleaseStringCritical: proc "c" (jni: ^^JNINativeInterface, str: jstring, carray: [^]u16),
	NewWeakGlobalRef: proc "c" (jni: ^^JNINativeInterface, obj: jobject) -> jweak,
	DeleteWeakGlobalRef: proc "c" (jni: ^^JNINativeInterface, obj: jweak),
	ExceptionCheck: proc "c" (jni: ^^JNINativeInterface),
	NewDirectByteBuffer: proc "c" (jni: ^^JNINativeInterface, address: rawptr, capacity: i64) -> jobject,
	GetDirectBufferAddress: proc "c" (jni: ^^JNINativeInterface, buf: jobject) -> rawptr,
	GetDirectBufferCapacity: proc "c" (jni: ^^JNINativeInterface, buf: jobject) -> i64,
	// added in JNI 1.6
	GetObjectRefType: proc "c" (jni: ^^JNINativeInterface, obj: jobject) -> jobjectRefType,
}
Table of interface function pointers.
JavaVM ¶
JavaVM :: ^JNIInvokeInterface
KeyBoardType ¶
KeyBoardType :: enum int { NONE = 0, NON_ALPHABETIC = 1, TYPE_ALPHABETIC = 2, }
Keyboard types. Refer to the documentation on android.view.InputDevice for more details. Note: when adding a new keyboard type here, InputDeviceInfo::setKeyboardType needs to be updated.
KeyEventAction ¶
KeyEventAction :: enum i32 {
	DOWN = 0, // The key has been pressed down.
	UP   = 1, // The key has been released.
	// Multiple duplicate key events have occurred in a row, or a complex string
	// is being delivered. The repeat_count property of the key event contains
	// the number of times the given key code should be executed.
	MULTIPLE = 2,
}
Key event actions.
Related Procedures With Returns
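A hedged input-handling sketch; AInputEvent_getType, AKeyEvent_getAction and AKeyEvent_getKeyCode are assumed to be bound with these return types:

	// Sketch: filter key events out of the input stream.
	handle_input :: proc(event: ^AInputEvent) {
		if AInputEvent_getType(event) != .KEY do return

		#partial switch AKeyEvent_getAction(event) {
		case .DOWN:
			if AKeyEvent_getKeyCode(event) == .BACK {
				// Handle back navigation.
			}
		case .UP:
			// Key released.
		}
	}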
KeyEventFlagsBits ¶
KeyEventFlagsBits :: enum int {
	WOKE_HERE       = 0, // This mask is set if the device woke because of this key event.
	SOFT_KEYBOARD   = 1, // This mask is set if the key event was generated by a software keyboard.
	KEEP_TOUCH_MODE = 2, // This mask is set if we don't want the key event to cause us to leave touch mode.
	// This mask is set if an event was known to come from a trusted part of the
	// system. That is, the event is known to come from the user, and could not
	// have been spoofed by a third party component.
	FROM_SYSTEM = 3,
	// This mask is used for compatibility, to identify enter keys that are
	// coming from an IME whose enter key has been auto-labelled "next" or
	// "done". This allows TextView to dispatch these as normal enter keys for
	// old applications, but still do the appropriate action when receiving them.
	EDITOR_ACTION = 4,
	// When associated with up key events, this indicates that the key press has
	// been canceled. Typically this is used with virtual touch screen keys,
	// where the user can slide from the virtual key area on to the display: in
	// that case, the application will receive a canceled up event and should not
	// perform the action normally associated with the key. Note that for this to
	// work, the application can not perform an action for a key until it
	// receives an up or the long press timeout has expired.
	CANCELED = 5,
	// This key event was generated by a virtual (on-screen) hard key area.
	// Typically this is an area of the touchscreen, outside of the regular
	// display, dedicated to "hardware" buttons.
	VIRTUAL_HARD_KEY = 6,
	// This flag is set for the first key repeat that occurs after the long
	// press timeout.
	LONG_PRESS = 7,
	// Set when a key event has #AKEY_EVENT_FLAG_CANCELED set because a long
	// press action was executed while it was down.
	CANCELED_LONG_PRESS = 8,
	// Set for #AKEY_EVENT_ACTION_UP when this event's key code is still being
	// tracked from its initial down. That is, somebody requested that tracking
	// started on the key down and a long press has not caused the tracking to
	// be canceled.
	FLAG_TRACKING = 9,
	// Set when a key event has been synthesized to implement default behavior
	// for an event that the application did not handle. Fallback key events are
	// generated by unhandled trackball motions (to emulate a directional keypad)
	// and by certain unhandled key presses that are declared in the key map
	// (such as special function numeric keypad keys when numlock is off).
	FLAG_FALLBACK = 10,
}
Key event flags.
KeyState ¶
KeyState :: enum int {
	UNKNOWN = -1, // The key state is unknown or the requested key itself is not supported.
	UP      = 0,  // The key is up.
	DOWN    = 1,  // The key is down.
	VIRTUAL = 2,  // The key is down but is a virtual key press that is being emulated by the system.
}
Key states (may be returned by queries about the current state of a particular key code, scan code or switch).
Keycode ¶
Keycode :: enum i32 {
	UNKNOWN = 0, // Unknown key code.
	// Soft Left key. Usually situated below the display on phones and used as a
	// multi-function feature key for selecting a software defined function shown
	// on the bottom left of the display.
	SOFT_LEFT = 1,
	// Soft Right key. Usually situated below the display on phones and used as a
	// multi-function feature key for selecting a software defined function shown
	// on the bottom right of the display.
	SOFT_RIGHT = 2,
	HOME = 3, // Home key. This key is handled by the framework and is never delivered to applications.
	BACK = 4, // Back key.
	CALL = 5, // Call key.
	ENDCALL = 6, // End Call key.
	// '0' through '9' keys.
	KEY_0 = 7, KEY_1 = 8, KEY_2 = 9, KEY_3 = 10, KEY_4 = 11,
	KEY_5 = 12, KEY_6 = 13, KEY_7 = 14, KEY_8 = 15, KEY_9 = 16,
	STAR = 17, // '*' key.
	POUND = 18, // '#' key.
	// Directional Pad Up/Down/Left/Right/Center keys.
	// May also be synthesized from trackball motions.
	DPAD_UP = 19, DPAD_DOWN = 20, DPAD_LEFT = 21, DPAD_RIGHT = 22, DPAD_CENTER = 23,
	VOLUME_UP = 24, // Volume Up key. Adjusts the speaker volume up.
	VOLUME_DOWN = 25, // Volume Down key. Adjusts the speaker volume down.
	POWER = 26, // Power key.
	CAMERA = 27, // Camera key. Used to launch a camera application or take pictures.
	CLEAR = 28, // Clear key.
	// 'A' through 'Z' keys.
	A = 29, B = 30, C = 31, D = 32, E = 33, F = 34, G = 35,
	H = 36, I = 37, J = 38, K = 39, L = 40, M = 41, N = 42,
	O = 43, P = 44, Q = 45, R = 46, S = 47, T = 48, U = 49,
	V = 50, W = 51, X = 52, Y = 53, Z = 54,
	COMMA = 55, // ',' key.
	PERIOD = 56, // '.' key.
	ALT_LEFT = 57, // Left Alt modifier key.
	ALT_RIGHT = 58, // Right Alt modifier key.
	SHIFT_LEFT = 59, // Left Shift modifier key.
	SHIFT_RIGHT = 60, // Right Shift modifier key.
	TAB = 61, // Tab key.
	SPACE = 62, // Space key.
	SYM = 63, // Symbol modifier key. Used to enter alternate symbols.
	EXPLORER = 64, // Explorer special function key. Used to launch a browser application.
	ENVELOPE = 65, // Envelope special function key. Used to launch a mail application.
	ENTER = 66, // Enter key.
	DEL = 67, // Backspace key. Deletes characters before the insertion point, unlike {@link FORWARD_DEL}.
	GRAVE = 68, // '`' (backtick) key.
	MINUS = 69, // '-' key.
	EQUALS = 70, // '=' key.
	LEFT_BRACKET = 71, // '[' key.
	RIGHT_BRACKET = 72, // ']' key.
	BACKSLASH = 73, // '\' key.
	SEMICOLON = 74, // ';' key.
	APOSTROPHE = 75, // ''' (apostrophe) key.
	SLASH = 76, // '/' key.
	AT = 77, // '@' key.
	// Number modifier key. Used to enter numeric symbols.
	// This key is not {@link NUM_LOCK}; it is more like {@link AKEYCODE_ALT_LEFT}.
	NUM = 78,
	HEADSETHOOK = 79, // Headset Hook key. Used to hang up calls and stop media.
	FOCUS = 80, // Camera Focus key. Used to focus the camera.
	PLUS = 81, // '+' key.
	MENU = 82, // Menu key.
	NOTIFICATION = 83, // Notification key.
	SEARCH = 84, // Search key.
	MEDIA_PLAY_PAUSE = 85, // Play/Pause media key.
	MEDIA_STOP = 86, // Stop media key.
	MEDIA_NEXT = 87, // Play Next media key.
	MEDIA_PREVIOUS = 88, // Play Previous media key.
	MEDIA_REWIND = 89, // Rewind media key.
	MEDIA_FAST_FORWARD = 90, // Fast Forward media key.
	MUTE = 91, // Mute key. Mutes the microphone, unlike {@link VOLUME_MUTE}.
	PAGE_UP = 92, // Page Up key.
	PAGE_DOWN = 93, // Page Down key.
	PICTSYMBOLS = 94, // Picture Symbols modifier key. Used to switch symbol sets (Emoji, Kao-moji).
	SWITCH_CHARSET = 95, // Switch Charset modifier key. Used to switch character sets (Kanji, Katakana).
	// A Button key. On a game controller, the A button should be either the button
	// labeled A or the first button on the bottom row of controller buttons.
	BUTTON_A = 96,
	// B Button key. On a game controller, the B button should be either the button
	// labeled B or the second button on the bottom row of controller buttons.
	BUTTON_B = 97,
	// C Button key. On a game controller, the C button should be either the button
	// labeled C or the third button on the bottom row of controller buttons.
	BUTTON_C = 98,
	// X Button key. On a game controller, the X button should be either the button
	// labeled X or the first button on the upper row of controller buttons.
	BUTTON_X = 99,
	// Y Button key. On a game controller, the Y button should be either the button
	// labeled Y or the second button on the upper row of controller buttons.
	BUTTON_Y = 100,
	// Z Button key. On a game controller, the Z button should be either the button
	// labeled Z or the third button on the upper row of controller buttons.
	BUTTON_Z = 101,
	// L1 Button key. On a game controller, the L1 button should be either the button
	// labeled L1 (or L) or the top left trigger button.
	BUTTON_L1 = 102,
	// R1 Button key. On a game controller, the R1 button should be either the button
	// labeled R1 (or R) or the top right trigger button.
	BUTTON_R1 = 103,
	// L2 Button key. On a game controller, the L2 button should be either the button
	// labeled L2 or the bottom left trigger button.
	BUTTON_L2 = 104,
	// R2 Button key. On a game controller, the R2 button should be either the button
	// labeled R2 or the bottom right trigger button.
	BUTTON_R2 = 105,
	// Left Thumb Button key. On a game controller, the left thumb button indicates
	// that the left (or only) joystick is pressed.
	BUTTON_THUMBL = 106,
	// Right Thumb Button key. On a game controller, the right thumb button indicates
	// that the right joystick is pressed.
	BUTTON_THUMBR = 107,
	BUTTON_START = 108, // Start Button key. On a game controller, the button labeled Start.
	BUTTON_SELECT = 109, // Select Button key. On a game controller, the button labeled Select.
	BUTTON_MODE = 110, // Mode Button key. On a game controller, the button labeled Mode.
	ESCAPE = 111, // Escape key.
	FORWARD_DEL = 112, // Forward Delete key. Deletes characters ahead of the insertion point, unlike {@link DEL}.
	CTRL_LEFT = 113, // Left Control modifier key.
	CTRL_RIGHT = 114, // Right Control modifier key.
	CAPS_LOCK = 115, // Caps Lock key.
	SCROLL_LOCK = 116, // Scroll Lock key.
	META_LEFT = 117, // Left Meta modifier key.
	META_RIGHT = 118, // Right Meta modifier key.
	FUNCTION = 119, // Function modifier key.
	SYSRQ = 120, // System Request / Print Screen key.
	BREAK = 121, // Break / Pause key.
	// Home Movement key. Used for scrolling or moving the cursor around to the
	// start of a line or to the top of a list.
	MOVE_HOME = 122,
	// End Movement key. Used for scrolling or moving the cursor around to the
	// end of a line or to the bottom of a list.
	MOVE_END = 123,
	INSERT = 124, // Insert key. Toggles insert / overwrite edit mode.
	FORWARD = 125, // Forward key. Navigates forward in the history stack. Complement of {@link BACK}.
	MEDIA_PLAY = 126, // Play media key.
	MEDIA_PAUSE = 127, // Pause media key.
	MEDIA_CLOSE = 128, // Close media key. May be used to close a CD tray, for example.
	MEDIA_EJECT = 129, // Eject media key. May be used to eject a CD tray, for example.
	MEDIA_RECORD = 130, // Record media key.
	// F1 through F12 keys.
	F1 = 131, F2 = 132, F3 = 133, F4 = 134, F5 = 135, F6 = 136,
	F7 = 137, F8 = 138, F9 = 139, F10 = 140, F11 = 141, F12 = 142,
	// Num Lock key. This is the Num Lock key; it is different from {@link NUM}.
	// This key alters the behavior of other keys on the numeric keypad.
	NUM_LOCK = 143,
	// Numeric keypad '0' through '9' keys.
	NUMPAD_0 = 144, NUMPAD_1 = 145, NUMPAD_2 = 146, NUMPAD_3 = 147, NUMPAD_4 = 148,
	NUMPAD_5 = 149, NUMPAD_6 = 150, NUMPAD_7 = 151, NUMPAD_8 = 152, NUMPAD_9 = 153,
	NUMPAD_DIVIDE = 154, // Numeric keypad '/' key (for division).
	NUMPAD_MULTIPLY = 155, // Numeric keypad '*' key (for multiplication).
	NUMPAD_SUBTRACT = 156, // Numeric keypad '-' key (for subtraction).
	NUMPAD_ADD = 157, // Numeric keypad '+' key (for addition).
	NUMPAD_DOT = 158, // Numeric keypad '.' key (for decimals or digit grouping).
	NUMPAD_COMMA = 159, // Numeric keypad ',' key (for decimals or digit grouping).
	NUMPAD_ENTER = 160, // Numeric keypad Enter key.
	NUMPAD_EQUALS = 161, // Numeric keypad '=' key.
	NUMPAD_LEFT_PAREN = 162, // Numeric keypad '(' key.
	NUMPAD_RIGHT_PAREN = 163, // Numeric keypad ')' key.
	// Volume Mute key. Mutes the speaker, unlike {@link MUTE}.
	// This key should normally be implemented as a toggle such that the first press
	// mutes the speaker and the second press restores the original volume.
	VOLUME_MUTE = 164,
	// Info key. Common on TV remotes to show additional information related to what is
	// currently being viewed.
	INFO = 165,
	CHANNEL_UP = 166, // Channel up key. On TV remotes, increments the television channel.
	CHANNEL_DOWN = 167, // Channel down key. On TV remotes, decrements the television channel.
	ZOOM_IN = 168, // Zoom in key.
	ZOOM_OUT = 169, // Zoom out key.
	TV = 170, // TV key. On TV remotes, switches to viewing live TV.
	WINDOW = 171, // Window key. On TV remotes, toggles picture-in-picture mode or other windowing functions.
	GUIDE = 172, // Guide key. On TV remotes, shows a programming guide.
	DVR = 173, // DVR key. On some TV remotes, switches to a DVR mode for recorded shows.
	BOOKMARK = 174, // Bookmark key. On some TV remotes, bookmarks content or web pages.
	CAPTIONS = 175, // Toggle captions key. Switches the mode for closed-captioning text, for example during television shows.
	SETTINGS = 176, // Settings key. Starts the system settings activity.
	TV_POWER = 177, // TV power key. On TV remotes, toggles the power on a television screen.
	TV_INPUT = 178, // TV input key. On TV remotes, switches the input on a television screen.
	STB_POWER = 179, // Set-top-box power key. On TV remotes, toggles the power on an external Set-top-box.
	STB_INPUT = 180, // Set-top-box input key. On TV remotes, switches the input mode on an external Set-top-box.
	AVR_POWER = 181, // A/V Receiver power key. On TV remotes, toggles the power on an external A/V Receiver.
	AVR_INPUT = 182, // A/V Receiver input key. On TV remotes, switches the input mode on an external A/V Receiver.
	// Red/Green/Yellow/Blue "programmable" keys. On TV remotes, they act as
	// contextual/programmable keys.
	PROG_RED = 183, PROG_GREEN = 184, PROG_YELLOW = 185, PROG_BLUE = 186,
	APP_SWITCH = 187, // App switch key. Should bring up the application switcher dialog.
	// Generic Game Pad Buttons #1 through #16.
	BUTTON_1 = 188, BUTTON_2 = 189, BUTTON_3 = 190, BUTTON_4 = 191,
	BUTTON_5 = 192, BUTTON_6 = 193, BUTTON_7 = 194, BUTTON_8 = 195,
	BUTTON_9 = 196, BUTTON_10 = 197, BUTTON_11 = 198, BUTTON_12 = 199,
	BUTTON_13 = 200, BUTTON_14 = 201, BUTTON_15 = 202, BUTTON_16 = 203,
	// Language Switch key. Toggles the current input language such as switching between
	// English and Japanese on a QWERTY keyboard. On some devices, the same function may
	// be performed by pressing Shift+Spacebar.
	LANGUAGE_SWITCH = 204,
	// Manner Mode key. Toggles silent or vibrate mode on and off to make the device
	// behave more politely in certain settings such as on a crowded train. On some
	// devices, the key may only operate when long-pressed.
	MANNER_MODE = 205,
	_3D_MODE = 206, // 3D Mode key. Toggles the display between 2D and 3D mode.
	CONTACTS = 207, // Contacts special function key. Used to launch an address book application.
	CALENDAR = 208, // Calendar special function key. Used to launch a calendar application.
	MUSIC = 209, // Music special function key. Used to launch a music player application.
	CALCULATOR = 210, // Calculator special function key. Used to launch a calculator application.
	ZENKAKU_HANKAKU = 211, // Japanese full-width / half-width key.
	EISU = 212, // Japanese alphanumeric key.
	MUHENKAN = 213, // Japanese non-conversion key.
	HENKAN = 214, // Japanese conversion key.
	KATAKANA_HIRAGANA = 215, // Japanese katakana / hiragana key.
	YEN = 216, // Japanese Yen key.
	RO = 217, // Japanese Ro key.
	KANA = 218, // Japanese kana key.
	ASSIST = 219, // Assist key. Launches the global assist activity. Not delivered to applications.
	BRIGHTNESS_DOWN = 220, // Brightness Down key. Adjusts the screen brightness down.
	BRIGHTNESS_UP = 221, // Brightness Up key. Adjusts the screen brightness up.
	MEDIA_AUDIO_TRACK = 222, // Audio Track key. Switches the audio tracks.
	// Sleep key. Puts the device to sleep. Behaves somewhat like {@link POWER} but it
	// has no effect if the device is already asleep.
	SLEEP = 223,
	// Wakeup key. Wakes up the device. Behaves somewhat like {@link POWER} but it
	// has no effect if the device is already awake.
	WAKEUP = 224,
	// Pairing key. Initiates peripheral pairing mode. Useful for pairing remote control
	// devices or game controllers, especially if no other input mode is available.
	PAIRING = 225,
	MEDIA_TOP_MENU = 226, // Media Top Menu key. Goes to the top of the media menu.
	KEY_11 = 227, // '11' key.
	KEY_12 = 228, // '12' key.
	LAST_CHANNEL = 229, // Last Channel key. Goes to the last viewed channel.
	TV_DATA_SERVICE = 230, // TV data service key. Displays data services like weather, sports.
	VOICE_ASSIST = 231, // Voice Assist key. Launches the global voice assist activity. Not delivered to applications.
	TV_RADIO_SERVICE = 232, // Radio key. Toggles TV service / Radio service.
	TV_TELETEXT = 233, // Teletext key. Displays the Teletext service.
	// Number entry key. Starts multi-digit channel number entry when each digit key is
	// assigned to selecting a separate channel. Corresponds to Number Entry Mode (0x1D)
	// of CEC User Control Code.
	TV_NUMBER_ENTRY = 234,
	TV_TERRESTRIAL_ANALOG = 235, // Analog Terrestrial key. Switches to analog terrestrial broadcast service.
	TV_TERRESTRIAL_DIGITAL = 236, // Digital Terrestrial key. Switches to digital terrestrial broadcast service.
	TV_SATELLITE = 237, // Satellite key. Switches to digital satellite broadcast service.
	TV_SATELLITE_BS = 238, // BS key. Switches to the BS digital satellite broadcasting service available in Japan.
	TV_SATELLITE_CS = 239, // CS key. Switches to the CS digital satellite broadcasting service available in Japan.
	TV_SATELLITE_SERVICE = 240, // BS/CS key. Toggles between BS and CS digital satellite services.
	TV_NETWORK = 241, // Toggle Network key. Toggles selecting broadcast services.
	TV_ANTENNA_CABLE = 242, // Antenna/Cable key. Toggles broadcast input source between antenna and cable.
	// HDMI #1 through #4 keys. Switch to HDMI inputs #1 through #4.
	TV_INPUT_HDMI_1 = 243, TV_INPUT_HDMI_2 = 244, TV_INPUT_HDMI_3 = 245, TV_INPUT_HDMI_4 = 246,
	TV_INPUT_COMPOSITE_1 = 247, // Composite #1 key. Switches to composite video input #1.
	TV_INPUT_COMPOSITE_2 = 248, // Composite #2 key. Switches to composite video input #2.
	TV_INPUT_COMPONENT_1 = 249, // Component #1 key. Switches to component video input #1.
	TV_INPUT_COMPONENT_2 = 250, // Component #2 key. Switches to component video input #2.
	TV_INPUT_VGA_1 = 251, // VGA #1 key. Switches to VGA (analog RGB) input #1.
	TV_AUDIO_DESCRIPTION = 252, // Audio description key. Toggles audio description off / on.
	// Audio description mixing volume up key. Raises the audio description volume
	// relative to the normal audio volume.
	TV_AUDIO_DESCRIPTION_MIX_UP = 253,
	// Audio description mixing volume down key. Lowers the audio description volume
	// relative to the normal audio volume.
	TV_AUDIO_DESCRIPTION_MIX_DOWN = 254,
	TV_ZOOM_MODE = 255, // Zoom mode key. Changes Zoom mode (Normal, Full, Zoom, Wide-zoom, etc.)
	// Contents menu key. Goes to the title list. Corresponds to Contents Menu (0x0B)
	// of CEC User Control Code.
	TV_CONTENTS_MENU = 256,
	// Media context menu key. Goes to the context menu of media contents. Corresponds
	// to Media Context-sensitive Menu (0x11) of CEC User Control Code.
	TV_MEDIA_CONTEXT_MENU = 257,
	// Timer programming key. Goes to the timer recording menu. Corresponds to
	// Timer Programming (0x54) of CEC User Control Code.
	TV_TIMER_PROGRAMMING = 258,
	HELP = 259, // Help key.
	NAVIGATE_PREVIOUS = 260,
	NAVIGATE_NEXT = 261,
	NAVIGATE_IN = 262,
	NAVIGATE_OUT = 263,
	STEM_PRIMARY = 264, // Primary stem key for Wear. Main power/reset button on watch.
	STEM_1 = 265, // Generic stem key 1 for Wear.
	STEM_2 = 266, // Generic stem key 2 for Wear.
	STEM_3 = 267, // Generic stem key 3 for Wear.
	DPAD_UP_LEFT = 268, // Directional Pad Up-Left.
	DPAD_DOWN_LEFT = 269, // Directional Pad Down-Left.
	DPAD_UP_RIGHT = 270, // Directional Pad Up-Right.
	DPAD_DOWN_RIGHT = 271, // Directional Pad Down-Right.
	MEDIA_SKIP_FORWARD = 272, // Skip forward media key.
	MEDIA_SKIP_BACKWARD = 273, // Skip backward media key.
	MEDIA_STEP_FORWARD = 274, // Step forward media key. Steps media forward one frame at a time.
	MEDIA_STEP_BACKWARD = 275, // Step backward media key. Steps media backward one frame at a time.
	SOFT_SLEEP = 276, // Puts the device to sleep unless a wakelock is held.
	CUT = 277, // Cut key.
	COPY = 278, // Copy key.
	PASTE = 279, // Paste key.
	SYSTEM_NAVIGATION_UP = 280, // Fingerprint navigation key, up.
	SYSTEM_NAVIGATION_DOWN = 281, // Fingerprint navigation key, down.
	SYSTEM_NAVIGATION_LEFT = 282, // Fingerprint navigation key, left.
	SYSTEM_NAVIGATION_RIGHT = 283, // Fingerprint navigation key, right.
	ALL_APPS = 284, // All apps key.
	REFRESH = 285, // Refresh key.
	THUMBS_UP = 286, // Thumbs up key. Apps can use this to let the user upvote content.
	THUMBS_DOWN = 287, // Thumbs down key. Apps can use this to let the user downvote content.
	// Used to switch the current account that is consuming content.
	// May be consumed by the system to switch the current viewer profile.
	PROFILE_SWITCH = 288,
	// Video Application keys #1 through #8.
	VIDEO_APP_1 = 289, VIDEO_APP_2 = 290, VIDEO_APP_3 = 291, VIDEO_APP_4 = 292,
	VIDEO_APP_5 = 293, VIDEO_APP_6 = 294, VIDEO_APP_7 = 295, VIDEO_APP_8 = 296,
	// Featured Application keys #1 through #4.
	FEATURED_APP_1 = 297, FEATURED_APP_2 = 298, FEATURED_APP_3 = 299, FEATURED_APP_4 = 300,
	// Demo Application keys #1 through #4.
	DEMO_APP_1 = 301, DEMO_APP_2 = 302, DEMO_APP_3 = 303, DEMO_APP_4 = 304,
	KEYBOARD_BACKLIGHT_DOWN = 305, // Keyboard backlight Down key. Adjusts the keyboard backlight brightness down.
	KEYBOARD_BACKLIGHT_UP = 306, // Keyboard backlight Up key. Adjusts the keyboard backlight brightness up.
	KEYBOARD_BACKLIGHT_TOGGLE = 307, // Keyboard backlight Toggle key. Toggles the keyboard backlight on/off.
	// The primary button on the barrel of a stylus. This is usually the button closest
	// to the tip of the stylus.
	STYLUS_BUTTON_PRIMARY = 308,
	// The secondary button on the barrel of a stylus. This is usually the second button
	// from the tip of the stylus.
	STYLUS_BUTTON_SECONDARY = 309,
	// The tertiary button on the barrel of a stylus. This is usually the third button
	// from the tip of the stylus.
	STYLUS_BUTTON_TERTIARY = 310,
	STYLUS_BUTTON_TAIL = 311, // A button on the tail end of a stylus.
	RECENT_APPS = 312, // Key to open recent apps (a.k.a. Overview).
	// User customizable keys #1 through #4.
	MACRO_1 = 313, MACRO_2 = 314, MACRO_3 = 315, MACRO_4 = 316,
	EMOJI_PICKER = 317, // Open Emoji picker.
	SCREENSHOT = 318, // Take Screenshot.
}
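Key codes typically arrive through an AInputEvent in a native activity's input callback. Below is a minimal sketch of routing a few of them; it assumes the binding mirrors the NDK's AKeyEvent_getKeyCode and that the result converts to this Keycode enum.

package key_example

import android "core:sys/android"

// Minimal sketch: route a few raw key events inside an input callback.
// Assumes AKeyEvent_getKeyCode follows the NDK prototype; the cast covers
// bindings that return a plain i32 instead of Keycode.
handle_key :: proc(event: ^android.AInputEvent) -> bool {
	code := android.Keycode(android.AKeyEvent_getKeyCode(event))
	#partial switch code {
	case .BACK:
		return true // handled; suppress the default back behavior
	case .VOLUME_UP, .VOLUME_DOWN:
		return false // let the system adjust the volume
	}
	return false
}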
Related Procedures With Returns
LogId ¶
LogId :: enum i32 {
	MIN = 0,
	MAIN = 0,     // The main log buffer. This is the only log buffer available to apps.
	RADIO = 1,    // The radio log buffer.
	EVENTS = 2,   // The event log buffer.
	SYSTEM = 3,   // The system log buffer.
	CRASH = 4,    // The crash log buffer.
	STATS = 5,    // The statistics log buffer.
	SECURITY = 6, // The security log buffer.
	KERNEL = 7,   // The kernel log buffer.
	MAX,
	DEFAULT = 2147483647, // Let the logging function choose the best log target.
}
Identifies a specific log buffer for __android_log_buf_write() and __android_log_buf_print().
LogMessage ¶
LogMessage :: struct {
	struct_size: uint,    // Must be set to sizeof(LogMessage) and is used for versioning.
	buffer_id:   i32,     // {@link log_id_t} values.
	priority:    i32,     // {@link android_LogPriority} values.
	tag:         cstring, // The tag for the log message.
	file:        cstring, // Optional file name, may be set to nullptr.
	line:        u32,     // Optional line number, ignored if file is nullptr.
	message:     cstring, // The log message itself.
}
Logger data struct used for writing log messages to liblog via __android_log_write_logger_data() and for sending log messages to user-defined loggers specified in __android_log_set_logger().
LogPriority ¶
LogPriority :: enum i32 {
	UNKNOWN = 0, // For internal use only.
	DEFAULT,     // The default priority, for internal use only. Only for SetMinPriority().
	VERBOSE,     // Verbose logging. Should typically be disabled for a release apk.
	DEBUG,       // Debug logging. Should typically be disabled for a release apk.
	INFO,        // Informational logging. Should typically be disabled for a release apk.
	WARN,        // Warning logging. For use with recoverable failures.
	ERROR,       // Error logging. For use with unrecoverable failures.
	FATAL,       // Fatal logging. For use when aborting.
	SILENT,      // For internal use only. Only for SetMinPriority(); must be last.
}
Android log priority values, in increasing order of priority.
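A minimal logging sketch follows; it assumes the package exposes the NDK's __android_log_write and that the binding accepts this LogPriority enum directly (the underlying C prototype takes an int priority, a tag, and the message text).

package log_example

import android "core:sys/android"

// Hypothetical usage; the exact parameter types of the binding may differ
// from this sketch (the NDK prototype is __android_log_write(int, const char*, const char*)).
report_startup :: proc() {
	android.__android_log_write(.INFO, "odin-app", "engine initialized")
	android.__android_log_write(.WARN, "odin-app", "no saved state found")
}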
MetaKeyStateBits ¶
MetaKeyStateBits :: enum i32 {
	ALT_ON         = 1,
	ALT_LEFT_ON    = 4,
	ALT_RIGHT_ON   = 5,
	SHIFT_ON       = 0,
	SHIFT_LEFT_ON  = 6,
	SHIFT_RIGHT_ON = 7,
	SYM_ON         = 2,
	FUNCTION_ON    = 3,
	CTRL_ON        = 12,
	CTRL_LEFT_ON   = 13,
	CTRL_RIGHT_ON  = 14,
	META_ON        = 16,
	META_LEFT_ON   = 17,
	META_RIGHT_ON  = 18,
	CAPS_LOCK_ON   = 20,
	NUM_LOCK_ON    = 21,
	SCROLL_LOCK_ON = 22,
}
Meta key / modifier state.
MotionEventAction ¶
MotionEventAction :: distinct bit_field i32 {
	action:        MotionEventActionEnum | 8,
	pointer_index: u8                    | 8,
	reserved:      i16                   | 16,
}
Packed layout: 0xRRRRIIAA, where A = action, I = pointer index, and R = reserved. Odin parses bit_fields starting at the least significant bit.
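A short sketch of unpacking the packed action word, assuming AMotionEvent_getAction in this package returns MotionEventAction (the NDK returns the raw i32; a transmute would cover that case):

package action_example

import android "core:sys/android"

// The bit_field exposes the low 8 action bits and the 8-bit pointer index
// directly, so no manual masking or shifting is needed.
on_motion :: proc(event: ^android.AInputEvent) {
	action := android.AMotionEvent_getAction(event) // assumed to return MotionEventAction
	#partial switch action.action {
	case .POINTER_DOWN, .POINTER_UP:
		idx := action.pointer_index // which pointer went down or up
		_ = idx
	}
}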
Related Procedures With Returns
MotionEventActionEnum ¶
MotionEventActionEnum :: enum u8 {
	// A pressed gesture has started; the motion contains the initial starting location.
	DOWN = 0,
	// A pressed gesture has finished; the motion contains the final release location
	// as well as any intermediate points since the last down or move event.
	UP = 1,
	// A change has happened during a press gesture (between #AMOTION_EVENT_ACTION_DOWN and
	// #AMOTION_EVENT_ACTION_UP). The motion contains the most recent point, as well as
	// any intermediate points since the last down or move event.
	MOVE = 2,
	// The current gesture has been aborted. You will not receive any more points in it.
	// You should treat this as an up event, but not perform any action that you normally would.
	CANCEL = 3,
	// A movement has happened outside of the normal bounds of the UI element.
	// This does not provide a full gesture, but only the initial location of the movement/touch.
	OUTSIDE = 4,
	// A non-primary pointer has gone down. The bits in
	// #AMOTION_EVENT_ACTION_POINTER_INDEX_MASK indicate which pointer changed.
	POINTER_DOWN = 5,
	// A non-primary pointer has gone up. The bits in
	// #AMOTION_EVENT_ACTION_POINTER_INDEX_MASK indicate which pointer changed.
	POINTER_UP = 6,
	// A change happened but the pointer is not down (unlike #AMOTION_EVENT_ACTION_MOVE).
	// The motion contains the most recent point, as well as any intermediate points since
	// the last hover move event.
	HOVER_MOVE = 7,
	// The motion event contains relative vertical and/or horizontal scroll offsets.
	// Use {@link AMotionEvent_getAxisValue} to retrieve the information from
	// #AMOTION_EVENT_AXIS_VSCROLL and #AMOTION_EVENT_AXIS_HSCROLL.
	// The pointer may or may not be down when this event is dispatched.
	// This action is always delivered to the window under the pointer, which
	// may not be the window currently touched.
	SCROLL = 8,
	// The pointer is not down but has entered the boundaries of a window or view.
	HOVER_ENTER = 9,
	// The pointer is not down but has exited the boundaries of a window or view.
	HOVER_EXIT = 10,
	// One or more buttons have been pressed.
	BUTTON_PRESS = 11,
	// One or more buttons have been released.
	BUTTON_RELEASE = 12,
}
MotionEventAxis ¶
MotionEventAxis :: enum i32 {
	// Axis constant: X axis of a motion event.
	// - For a touch screen, reports the absolute X screen position of the center of
	//   the touch contact area. The units are display pixels.
	// - For a touch pad, reports the absolute X surface position of the center of the
	//   touch contact area. The units are device-dependent.
	// - For a mouse, reports the absolute X screen position of the mouse pointer.
	//   The units are display pixels.
	// - For a trackball, reports the relative horizontal displacement of the trackball.
	//   The value is normalized to a range from -1.0 (left) to 1.0 (right).
	// - For a joystick, reports the absolute X position of the joystick.
	//   The value is normalized to a range from -1.0 (left) to 1.0 (right).
	X = 0,
	// Axis constant: Y axis of a motion event.
	// - For a touch screen, reports the absolute Y screen position of the center of
	//   the touch contact area. The units are display pixels.
	// - For a touch pad, reports the absolute Y surface position of the center of the
	//   touch contact area. The units are device-dependent.
	// - For a mouse, reports the absolute Y screen position of the mouse pointer.
	//   The units are display pixels.
	// - For a trackball, reports the relative vertical displacement of the trackball.
	//   The value is normalized to a range from -1.0 (up) to 1.0 (down).
	// - For a joystick, reports the absolute Y position of the joystick.
	//   The value is normalized to a range from -1.0 (up or far) to 1.0 (down or near).
	Y = 1,
	// Axis constant: Pressure axis of a motion event.
	// - For a touch screen or touch pad, reports the approximate pressure applied to the
	//   surface by a finger or other tool. The value is normalized to a range from
	//   0 (no pressure at all) to 1 (normal pressure), although values higher than 1
	//   may be generated depending on the calibration of the input device.
	// - For a trackball, the value is set to 1 if the trackball button is pressed or 0 otherwise.
	// - For a mouse, the value is set to 1 if the primary mouse button is pressed or 0 otherwise.
	PRESSURE = 2,
	// Axis constant: Size axis of a motion event.
	// - For a touch screen or touch pad, reports the approximate size of the contact area
	//   in relation to the maximum detectable size for the device. The value is normalized
	//   to a range from 0 (smallest detectable size) to 1 (largest detectable size),
	//   although it is not a linear scale. This value is of limited use.
	//   To obtain calibrated size information, see
	//   {@link AMOTION_EVENT_AXIS_TOUCH_MAJOR} or {@link AMOTION_EVENT_AXIS_TOOL_MAJOR}.
	SIZE = 3,
	// Axis constant: TouchMajor axis of a motion event.
	// - For a touch screen, reports the length of the major axis of an ellipse that
	//   represents the touch area at the point of contact. The units are display pixels.
	// - For a touch pad, reports the length of the major axis of an ellipse that
	//   represents the touch area at the point of contact. The units are device-dependent.
	TOUCH_MAJOR = 4,
	// Axis constant: TouchMinor axis of a motion event.
	// - For a touch screen, reports the length of the minor axis of an ellipse that
	//   represents the touch area at the point of contact. The units are display pixels.
	// - For a touch pad, reports the length of the minor axis of an ellipse that
	//   represents the touch area at the point of contact. The units are device-dependent.
	// When the touch is circular, the major and minor axis lengths will be equal to one another.
	TOUCH_MINOR = 5,
	// Axis constant: ToolMajor axis of a motion event.
	// - For a touch screen, reports the length of the major axis of an ellipse that
	//   represents the size of the approaching finger or tool used to make contact.
	// - For a touch pad, reports the length of the major axis of an ellipse that
	//   represents the size of the approaching finger or tool used to make contact.
	//   The units are device-dependent.
	// When the touch is circular, the major and minor axis lengths will be equal to one another.
	// The tool size may be larger than the touch size since the tool may not be fully
	// in contact with the touch sensor.
	TOOL_MAJOR = 6,
	// Axis constant: ToolMinor axis of a motion event.
	// - For a touch screen, reports the length of the minor axis of an ellipse that
	//   represents the size of the approaching finger or tool used to make contact.
	// - For a touch pad, reports the length of the minor axis of an ellipse that
	//   represents the size of the approaching finger or tool used to make contact.
	//   The units are device-dependent.
	// When the touch is circular, the major and minor axis lengths will be equal to one another.
	// The tool size may be larger than the touch size since the tool may not be fully
	// in contact with the touch sensor.
	TOOL_MINOR = 7,
	// Axis constant: Orientation axis of a motion event.
	// - For a touch screen or touch pad, reports the orientation of the finger or tool
	//   in radians relative to the vertical plane of the device. An angle of 0 radians
	//   indicates that the major axis of contact is oriented upwards, is perfectly
	//   circular or is of unknown orientation. A positive angle indicates that the major
	//   axis of contact is oriented to the right. A negative angle indicates that the
	//   major axis of contact is oriented to the left. The full range is from -PI/2
	//   radians (finger pointing fully left) to PI/2 radians (finger pointing fully right).
	// - For a stylus, the orientation indicates the direction in which the stylus is
	//   pointing in relation to the vertical axis of the current orientation of the
	//   screen. The range is from -PI radians to PI radians, where 0 is pointing up,
	//   -PI/2 radians is pointing left, -PI or PI radians is pointing down, and PI/2
	//   radians is pointing right. See also #AMOTION_EVENT_AXIS_TILT.
	ORIENTATION = 8,
	// Axis constant: Vertical Scroll axis of a motion event.
	// - For a mouse, reports the relative movement of the vertical scroll wheel.
	//   The value is normalized to a range from -1.0 (down) to 1.0 (up).
	// This axis should be used to scroll views vertically.
	VSCROLL = 9,
	// Axis constant: Horizontal Scroll axis of a motion event.
	// - For a mouse, reports the relative movement of the horizontal scroll wheel.
	//   The value is normalized to a range from -1.0 (left) to 1.0 (right).
	// This axis should be used to scroll views horizontally.
	HSCROLL = 10,
	// Axis constant: Z axis of a motion event.
	// - For a joystick, reports the absolute Z position of the joystick.
	//   The value is normalized to a range from -1.0 (high) to 1.0 (low).
	// On game pads with two analog joysticks, this axis is often reinterpreted
	// to report the absolute X position of the second joystick instead.
	Z = 11,
	// Axis constant: X Rotation axis of a motion event.
	// - For a joystick, reports the absolute rotation angle about the X axis.
	//   The value is normalized to a range from -1.0 (counter-clockwise) to 1.0 (clockwise).
	RX = 12,
	// Axis constant: Y Rotation axis of a motion event.
	// - For a joystick, reports the absolute rotation angle about the Y axis.
	//   The value is normalized to a range from -1.0 (counter-clockwise) to 1.0 (clockwise).
	RY = 13,
	// Axis constant: Z Rotation axis of a motion event.
	// - For a joystick, reports the absolute rotation angle about the Z axis.
	//   The value is normalized to a range from -1.0 (counter-clockwise) to 1.0 (clockwise).
	// On game pads with two analog joysticks, this axis is often reinterpreted
	// to report the absolute Y position of the second joystick instead.
	RZ = 14,
	// Axis constant: Hat X axis of a motion event.
	// - For a joystick, reports the absolute X position of the directional hat control.
	//   The value is normalized to a range from -1.0 (left) to 1.0 (right).
	HAT_X = 15,
	// Axis constant: Hat Y axis of a motion event.
	// - For a joystick, reports the absolute Y position of the directional hat control.
	//   The value is normalized to a range from -1.0 (up) to 1.0 (down).
	HAT_Y = 16,
	// Axis constant: Left Trigger axis of a motion event.
	// - For a joystick, reports the absolute position of the left trigger control.
	//   The value is normalized to a range from 0.0 (released) to 1.0 (fully pressed).
	LTRIGGER = 17,
	// Axis constant: Right Trigger axis of a motion event.
	// - For a joystick, reports the absolute position of the right trigger control.
	//   The value is normalized to a range from 0.0 (released) to 1.0 (fully pressed).
	RTRIGGER = 18,
	// Axis constant: Throttle axis of a motion event.
	// - For a joystick, reports the absolute position of the throttle control.
	//   The value is normalized to a range from 0.0 (fully open) to 1.0 (fully closed).
	THROTTLE = 19,
	// Axis constant: Rudder axis of a motion event.
	// - For a joystick, reports the absolute position of the rudder control.
	//   The value is normalized to a range from -1.0 (turn left) to 1.0 (turn right).
	RUDDER = 20,
	// Axis constant: Wheel axis of a motion event.
	// - For a joystick, reports the absolute position of the steering wheel control.
	//   The value is normalized to a range from -1.0 (turn left) to 1.0 (turn right).
	WHEEL = 21,
	// Axis constant: Gas axis of a motion event.
	// - For a joystick, reports the absolute position of the gas (accelerator) control.
	//   The value is normalized to a range from 0.0 (no acceleration) to 1.0 (maximum acceleration).
	GAS = 22,
	// Axis constant: Brake axis of a motion event.
	// - For a joystick, reports the absolute position of the brake control.
	//   The value is normalized to a range from 0.0 (no braking) to 1.0 (maximum braking).
	BRAKE = 23,
	// Axis constant: Distance axis of a motion event.
	// - For a stylus, reports the distance of the stylus from the screen.
	//   A value of 0.0 indicates direct contact and larger values indicate increasing
	//   distance from the surface.
	DISTANCE = 24,
	// Axis constant: Tilt axis of a motion event.
	// - For a stylus, reports the tilt angle of the stylus in radians where
	//   0 radians indicates that the stylus is being held perpendicular to the
	//   surface, and PI/2 radians indicates that the stylus is being held flat
	//   against the surface.
	TILT = 25,
	// Axis constant: Generic scroll axis of a motion event.
	// - This is used for scroll axis motion events that can't be classified as strictly
	//   vertical or horizontal. The movement of a rotating scroller is an example of this.
	SCROLL = 26,
	// Axis constant: The movement of x position of a motion event.
	// - For a mouse, reports the difference in x position from the previous position.
	//   This is useful when the pointer is captured; in that case the mouse pointer does
	//   not change location, but this axis reports the difference, which lets the app
	//   see how the mouse is moved.
	RELATIVE_X = 27,
	// Axis constant: The movement of y position of a motion event.
	// Same as #AMOTION_EVENT_AXIS_RELATIVE_X, but for the y position.
	RELATIVE_Y = 28,
	// Axis constants: Generic 1 through 16 axes of a motion event.
	// The interpretation of a generic axis is device-specific.
	GENERIC_1 = 32,
	GENERIC_2 = 33,
	GENERIC_3 = 34,
	GENERIC_4 = 35,
	GENERIC_5 = 36,
	GENERIC_6 = 37,
	GENERIC_7 = 38,
	GENERIC_8 = 39,
	GENERIC_9 = 40,
	GENERIC_10 = 41,
	GENERIC_11 = 42,
	GENERIC_12 = 43,
	GENERIC_13 = 44,
	GENERIC_14 = 45,
	GENERIC_15 = 46,
	GENERIC_16 = 47,
}
Constants that identify each individual axis of a motion event. @anchor AMOTION_EVENT_AXIS
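As an illustration, a joystick handler might sample a few of these axes. The sketch assumes the binding follows the NDK's AMotionEvent_getAxisValue(event, axis, pointer_index) and accepts this enum for the axis parameter.

package axis_example

import android "core:sys/android"

// Joystick axes arrive pre-normalized: sticks in [-1.0, 1.0], triggers in [0.0, 1.0].
read_gamepad :: proc(event: ^android.AInputEvent) -> (x, y, lt: f32) {
	x = android.AMotionEvent_getAxisValue(event, .X, 0)
	y = android.AMotionEvent_getAxisValue(event, .Y, 0)
	lt = android.AMotionEvent_getAxisValue(event, .LTRIGGER, 0)
	return
}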
Related Procedures With Parameters
MotionEventButton ¶
MotionEventButton :: enum int {
	PRIMARY          = 1,
	SECONDARY        = 2,
	TERTIARY         = 4,
	BACK             = 8,
	FORWARD          = 16,
	STYLUS_PRIMARY   = 32,
	STYLUS_SECONDARY = 64,
}
Constants that identify buttons that are associated with motion events. Refer to the documentation on the MotionEvent class for descriptions of each button.
TODO: bit_set this?
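Until that happens, the values are single bits combined into a mask, so a returned button state has to be tested bitwise rather than compared for equality. A sketch, assuming the binding wraps the NDK's AMotionEvent_getButtonState:

package button_example

import android "core:sys/android"

// Bitwise test against the mask; an equality comparison would fail whenever
// more than one button is held.
is_right_click :: proc(event: ^android.AInputEvent) -> bool {
	state := int(android.AMotionEvent_getButtonState(event))
	return state & int(android.MotionEventButton.SECONDARY) != 0
}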
Related Procedures With Returns
MotionEventEdgeFlags ¶
MotionEventEdgeFlags :: bit_set[MotionEventEdgeFlagsBits; i32]
Related Procedures With Returns
MotionEventEdgeFlagsBits ¶
MotionEventEdgeFlagsBits :: enum i32 {
	TOP    = 0, // Flag indicating the motion event intersected the top edge of the screen.
	BOTTOM = 1, // Flag indicating the motion event intersected the bottom edge of the screen.
	LEFT   = 2, // Flag indicating the motion event intersected the left edge of the screen.
	RIGHT  = 3, // Flag indicating the motion event intersected the right edge of the screen.
}
Motion event edge touch flags.
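Since MotionEventEdgeFlags is an Odin bit_set, membership is tested with set operations rather than bit masking. A sketch, assuming the binding's AMotionEvent_getEdgeFlags returns this set:

package edge_example

import android "core:sys/android"

touched_screen_edge :: proc(event: ^android.AInputEvent) -> bool {
	flags := android.AMotionEvent_getEdgeFlags(event) // assumed to return MotionEventEdgeFlags
	return flags & {.TOP, .BOTTOM, .LEFT, .RIGHT} != {}
}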
MotionEventFlags ¶
MotionEventFlags :: bit_set[MotionEventFlagsBits; i32]
Related Procedures With Returns
MotionEventFlagsBits ¶
MotionEventFlagsBits :: enum i32 {
	// This flag indicates that the window that received this motion event is partly
	// or wholly obscured by another visible window above it. This flag is set to true
	// even if the event did not directly pass through the obscured area.
	// A security sensitive application can check this flag to identify situations in which
	// a malicious application may have covered up part of its content for the purpose
	// of misleading the user or hijacking touches. An appropriate response might be
	// to drop the suspect touches or to take additional precautions to confirm the user's
	// actual intent.
	WINDOW_IS_OBSCURED = 0,
}
Motion event flags.
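Security-sensitive code can act on this flag as the description suggests, for example by ignoring touches delivered while the window was obscured. A sketch, assuming AMotionEvent_getFlags in this package returns this bit_set:

package flags_example

import android "core:sys/android"

should_trust_touch :: proc(event: ^android.AInputEvent) -> bool {
	return .WINDOW_IS_OBSCURED not_in android.AMotionEvent_getFlags(event)
}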
MotionRange ¶
MotionRange :: enum int {
	X = 0,
	Y = 1,
	PRESSURE = 2,
	SIZE = 3,
	TOUCH_MAJOR = 4,
	TOUCH_MINOR = 5,
	TOOL_MAJOR = 6,
	TOOL_MINOR = 7,
	ORIENTATION = 8,
}
Constants used to retrieve information about the range of motion for a particular coordinate of a motion event. Refer to the documentation on android.view.InputDevice for more details about input sources and their correct interpretation. @deprecated These constants are deprecated. Use {@link AMOTION_EVENT_AXIS AMOTION_EVENT_AXIS_} constants instead.
TODO: should I just delete this ??
NNResultCode ¶
NNResultCode :: enum i32 {
	NO_ERROR = 0,        // Operation was successful.
	OUT_OF_MEMORY = 1,   // Failure caused by not enough available memory.
	INCOMPLETE = 2,
	UNEXPECTED_NULL = 3, // Failure caused by unexpected null argument.
	// Failure caused by invalid function arguments, invalid model definition,
	// invalid execution definition or invalid data at execution time.
	BAD_DATA = 4,
	OP_FAILED = 5, // Failure caused by failed model execution.
	BAD_STATE = 6, // Failure caused by object being in the wrong state.
	// Failure caused by not being able to map a file into memory.
	// This may be caused by a file descriptor not being mappable, or an AHardwareBuffer
	// not supported by the device. Mitigate by reading its content into memory.
	UNMAPPABLE = 7,
	// Failure caused by insufficient buffer size provided to a model output.
	OUTPUT_INSUFFICIENT_SIZE = 8,
	// Failure caused by a device not being available.
	UNAVAILABLE_DEVICE = 9,
	// Failure because a deadline could not be met for a task, but future
	// deadlines may still be met for the same task after a short delay.
	// Available since NNAPI feature level 4.
	MISSED_DEADLINE_TRANSIENT = 10,
	// Failure because a deadline could not be met for a task, and future
	// deadlines will likely also not be met for the same task even after a short delay.
	// Available since NNAPI feature level 4.
	MISSED_DEADLINE_PERSISTENT = 11,
	// Failure because of a resource limitation within the driver, but future
	// calls for the same task may still succeed after a short delay.
	// Available since NNAPI feature level 4.
	RESOURCE_EXHAUSTED_TRANSIENT = 12,
	// Failure because of a resource limitation within the driver, and future
	// calls for the same task will likely also fail even after a short delay.
	// Available since NNAPI feature level 4.
	RESOURCE_EXHAUSTED_PERSISTENT = 13,
	// Failure indicating an object is in a dead state.
	// Available since NNAPI feature level 4.
	DEAD_OBJECT = 14,
}
Result codes. Any NNAPI function can return any result code, including result codes not currently documented. Any value other than {@link ANEURALNETWORKS_NO_ERROR} indicates a failure of some kind. Additional information about the nature of a failure can be obtained from the device log after enabling NNAPI debugging by setting the debug.nn.vlog property to 1, e.g., by calling "adb shell setprop debug.nn.vlog 1". Available since NNAPI feature level 1.
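Each of the procedures listed below reports failure through this enum, so NNAPI code is usually written as a chain of checked calls. A minimal sketch, assuming these bindings return NNResultCode directly:

package nn_example

import android "core:sys/android"

// Build an empty model, checking every step; ANeuralNetworksModel_free is the
// NDK cleanup call and is assumed to be bound under the same name.
build_model :: proc() -> (model: ^android.ANeuralNetworksModel, ok: bool) {
	if android.ANeuralNetworksModel_create(&model) != .NO_ERROR {
		return nil, false
	}
	// ... add operands and operations here ...
	if android.ANeuralNetworksModel_finish(model) != .NO_ERROR {
		android.ANeuralNetworksModel_free(model)
		return nil, false
	}
	return model, true
}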
Related Procedures With Returns
- ANeuralNetworksBurst_create
- ANeuralNetworksCompilation_create
- ANeuralNetworksCompilation_createForDevices
- ANeuralNetworksCompilation_finish
- ANeuralNetworksCompilation_getPreferredMemoryAlignmentForInput
- ANeuralNetworksCompilation_getPreferredMemoryAlignmentForOutput
- ANeuralNetworksCompilation_getPreferredMemoryPaddingForInput
- ANeuralNetworksCompilation_getPreferredMemoryPaddingForOutput
- ANeuralNetworksCompilation_setCaching
- ANeuralNetworksCompilation_setPreference
- ANeuralNetworksCompilation_setPriority
- ANeuralNetworksCompilation_setTimeout
- ANeuralNetworksDevice_getFeatureLevel
- ANeuralNetworksDevice_getName
- ANeuralNetworksDevice_getType
- ANeuralNetworksDevice_getVersion
- ANeuralNetworksDevice_wait
- ANeuralNetworksEvent_createFromSyncFenceFd
- ANeuralNetworksEvent_getSyncFenceFd
- ANeuralNetworksEvent_wait
- ANeuralNetworksExecution_burstCompute
- ANeuralNetworksExecution_compute
- ANeuralNetworksExecution_create
- ANeuralNetworksExecution_enableInputAndOutputPadding
- ANeuralNetworksExecution_getDuration
- ANeuralNetworksExecution_getOutputOperandDimensions
- ANeuralNetworksExecution_getOutputOperandRank
- ANeuralNetworksExecution_setInput
- ANeuralNetworksExecution_setInputFromMemory
- ANeuralNetworksExecution_setLoopTimeout
- ANeuralNetworksExecution_setMeasureTiming
- ANeuralNetworksExecution_setOutput
- ANeuralNetworksExecution_setOutputFromMemory
- ANeuralNetworksExecution_setReusable
- ANeuralNetworksExecution_setTimeout
- ANeuralNetworksExecution_startCompute
- ANeuralNetworksExecution_startComputeWithDependencies
- ANeuralNetworksMemoryDesc_addInputRole
- ANeuralNetworksMemoryDesc_addOutputRole
- ANeuralNetworksMemoryDesc_create
- ANeuralNetworksMemoryDesc_finish
- ANeuralNetworksMemoryDesc_setDimensions
- ANeuralNetworksMemory_copy
- ANeuralNetworksMemory_createFromAHardwareBuffer
- ANeuralNetworksMemory_createFromDesc
- ANeuralNetworksMemory_createFromFd
- ANeuralNetworksModel_addOperand
- ANeuralNetworksModel_addOperation
- ANeuralNetworksModel_create
- ANeuralNetworksModel_finish
- ANeuralNetworksModel_getSupportedOperationsForDevices
- ANeuralNetworksModel_identifyInputsAndOutputs
- ANeuralNetworksModel_relaxComputationFloat32toFloat16
- ANeuralNetworksModel_setOperandSymmPerChannelQuantParams
- ANeuralNetworksModel_setOperandValue
- ANeuralNetworksModel_setOperandValueFromMemory
- ANeuralNetworksModel_setOperandValueFromModel
- ANeuralNetworks_getDevice
- ANeuralNetworks_getDeviceCount
OBBState ¶
OBBState :: enum i32 {
	// The OBB container is now mounted and ready for use. Can be returned
	// as the status for callbacks made during asynchronous OBB actions.
	MOUNTED = 1,
	// The OBB container is now unmounted and not usable. Can be returned
	// as the status for callbacks made during asynchronous OBB actions.
	UNMOUNTED = 2,
	// There was an internal system error encountered while trying to mount the OBB.
	// Can be returned as the status for callbacks made during asynchronous OBB actions.
	ERROR_INTERNAL = 20,
	// The OBB could not be mounted by the system. Can be returned as the
	// status for callbacks made during asynchronous OBB actions.
	ERROR_COULD_NOT_MOUNT = 21,
	// The OBB could not be unmounted. This most likely indicates that a
	// file is in use on the OBB. Can be returned as the status for
	// callbacks made during asynchronous OBB actions.
	ERROR_COULD_NOT_UNMOUNT = 22,
	// A call was made to unmount the OBB when it was not mounted. Can be
	// returned as the status for callbacks made during asynchronous OBB actions.
	ERROR_NOT_MOUNTED = 23,
	// The OBB has already been mounted. Can be returned as the status for
	// callbacks made during asynchronous OBB actions.
	ERROR_ALREADY_MOUNTED = 24,
	// The current application does not have permission to use this OBB.
	// This could be because the OBB indicates it's owned by a different
	// package. Can be returned as the status for callbacks made during
	// asynchronous OBB actions.
	ERROR_PERMISSION_DENIED = 25,
}
The different states of an OBB storage passed to AStorageManager_obbCallbackFunc().
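For illustration, an OBB callback might branch on these states. The callback shape below follows the NDK's AStorageManager_obbCallbackFunc (filename, state, user data); the binding's exact signature may differ.

package obb_example

import android "core:sys/android"

on_obb_state :: proc "c" (filename: cstring, state: i32, data: rawptr) {
	#partial switch android.OBBState(state) {
	case .MOUNTED:
		// Safe to read assets from the mount point now.
	case .ERROR_PERMISSION_DENIED:
		// The OBB is owned by a different package.
	}
}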
ObbFlags ¶
ObbFlags :: enum i32 { OVERLAY = 1, }
Flag for an OBB file, returned by AObbInfo_getFlags().
Related Procedures With Returns
OperandCode ¶
OperandCode :: enum i32 {
	FLOAT32 = 0,        // A 32 bit floating point scalar value.
	INT32 = 1,          // A signed 32 bit integer scalar value.
	UINT32 = 2,         // An unsigned 32 bit integer scalar value.
	TENSOR_FLOAT32 = 3, // A tensor of 32 bit floating point values.
	TENSOR_INT32 = 4,   // A tensor of 32 bit integer values.
	// A tensor of 8 bit unsigned integers that represent real numbers.
	// Attached to this tensor are two numbers that can be used to convert the
	// 8 bit integer to the real value and vice versa. These two numbers are:
	// - scale: a 32 bit floating point value greater than zero.
	// - zeroPoint: a 32 bit integer, in range [0, 255].
	// The formula is: real_value = (integer_value - zeroPoint) * scale.
	TENSOR_QUANT8_ASYMM = 5,
	// An 8 bit boolean scalar value. Values of this operand type are either
	// true or false. A zero value represents false; any other value represents true.
	// Available since NNAPI feature level 3.
	BOOL = 6,
	// A tensor of 16 bit signed integers that represent real numbers.
	// Attached to this tensor is a number representing the real value scale that is
	// used to convert the 16 bit number to a real value in the following way:
	// realValue = integerValue * scale.
	// scale is a 32 bit floating point with value greater than zero.
	// Available since NNAPI feature level 3.
	TENSOR_QUANT16_SYMM = 7,
	// A tensor of IEEE 754 16 bit floating point values.
	// Available since NNAPI feature level 3.
	TENSOR_FLOAT16 = 8,
	// A tensor of 8 bit boolean values. Values of this operand type are either
	// true or false. A zero value represents false; any other value represents true.
	// Available since NNAPI feature level 3.
	TENSOR_BOOL8 = 9,
	// An IEEE 754 16 bit floating point scalar value.
	// Available since NNAPI feature level 3.
	FLOAT16 = 10,
	// A tensor of 8 bit signed integers that represent real numbers.
	// This tensor is associated with additional fields that can be used to convert
	// the 8 bit signed integer to the real value and vice versa. These fields are:
	// - channelDim: a 32 bit unsigned integer indicating channel dimension.
	// - scales: an array of positive 32 bit floating point values.
	//   The size of the scales array must be equal to dimensions[channelDim].
	// {@link ANeuralNetworksModel_setOperandSymmPerChannelQuantParams} must be used
	// to set the parameters for an Operand of this type.
	// The channel dimension of this tensor must not be unknown (dimensions[channelDim] != 0).
	// The formula is:
	//   realValue[..., C, ...] = integerValue[..., C, ...] * scales[C]
	// where C is an index in the Channel dimension.
	// Available since NNAPI feature level 3.
	TENSOR_QUANT8_SYMM_PER_CHANNEL = 11,
	// A tensor of 16 bit unsigned integers that represent real numbers.
	// Attached to this tensor are two numbers that can be used to convert the
	// 16 bit integer to the real value and vice versa. These two numbers are:
	// - scale: a 32 bit floating point value greater than zero.
	// - zeroPoint: a 32 bit integer, in range [0, 65535].
	// The formula is: real_value = (integer_value - zeroPoint) * scale.
	// Available since NNAPI feature level 3.
	TENSOR_QUANT16_ASYMM = 12,
	// A tensor of 8 bit signed integers that represent real numbers.
	// Attached to this tensor is a number representing the real value scale that is
	// used to convert the 8 bit number to a real value in the following way:
	// realValue = integerValue * scale.
	// scale is a 32 bit floating point with value greater than zero.
	// Available since NNAPI feature level 3.
	TENSOR_QUANT8_SYMM = 13,
	// A tensor of 8 bit signed integers that represent real numbers.
	// Attached to this tensor are two numbers that can be used to convert the
	// 8 bit integer to the real value and vice versa. These two numbers are:
	// - scale: a 32 bit floating point value greater than zero.
	// - zeroPoint: a 32 bit integer, in range [-128, 127].
	// The formula is: real_value = (integer_value - zeroPoint) * scale.
	// Available since NNAPI feature level 4.
	TENSOR_QUANT8_ASYMM_SIGNED = 14,
	// A reference to a model.
	// {@link ANeuralNetworksModel_setOperandValueFromModel} must be used to set
	// the value for an Operand of this type.
	// Available since NNAPI feature level 4.
	MODEL = 15,
}
Operand types. The type of an operand in a model. Types prefaced with ANEURALNETWORKS_TENSOR_ must be used for tensor data (i.e., tensors with at least one dimension). Types not prefaced by ANEURALNETWORKS_TENSOR_ represent scalar values and must have no dimensions. Although we define many types, most operators accept just a few types. Most used are {@link ANEURALNETWORKS_TENSOR_FLOAT32}, {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}, and {@link ANEURALNETWORKS_INT32}. Available since NNAPI feature level 1.
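The quantized types above all share the affine mapping real_value = (integer_value - zeroPoint) * scale. A small worked example: with scale = 0.5 and zeroPoint = 128, the TENSOR_QUANT8_ASYMM byte 130 decodes to (130 - 128) * 0.5 = 1.0.

package quant_example

// Dequantize one TENSOR_QUANT8_ASYMM value using the formula documented above.
dequantize_q8_asymm :: proc(q: u8, scale: f32, zero_point: i32) -> f32 {
	return f32(i32(q) - zero_point) * scale
}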
OperationCode ¶
OperationCode :: enum i32 {
	// Adds two tensors, element-wise.
	//
	// Takes two input tensors of identical {@link OperandCode} and compatible
	// dimensions. The output is the sum of both input tensors, optionally
	// modified by an activation function.
	//
	// Two dimensions are compatible when:
	//   1. they are equal, or
	//   2. one of them is 1
	//
	// The size of the output is the maximum size along each dimension of the
	// input operands. It starts with the trailing dimensions, and works its way forward.
	//
	// Example:
	//   input1.dimension = {4, 1, 2}
	//   input2.dimension = {5, 4, 3, 1}
	//   output.dimension = {5, 4, 3, 2}
	//
	// Since NNAPI feature level 3, a generic zero-sized input tensor is supported. A zero
	// dimension is only compatible with 0 or 1. The size of the output dimension is
	// zero if either of the corresponding input dimensions is zero.
	//
	// Supported tensor {@link OperandCode}:
	// * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3)
	// * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
	// * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
	// * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4)
	// * {@link ANEURALNETWORKS_TENSOR_INT32} (since NNAPI feature level 4)
	//
	// Supported tensor rank: up to 4
	//
	// Inputs:
	// * 0: A tensor.
	// * 1: A tensor of the same {@link OperandCode}, and compatible dimensions as input0.
	//      For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
	//      {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
	//      the scales and zeroPoint can be different from input0 scale and zeroPoint.
	// * 2: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
	//      {@link FuseCode} values. Specifies the activation to invoke on the result.
	//      For a {@link ANEURALNETWORKS_TENSOR_INT32} tensor, the {@link FuseCode} must be "NONE".
	//
	// Outputs:
	// * 0: The sum, a tensor of the same {@link OperandCode} as input0.
	//      For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
	//      {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
	//      the scale and zeroPoint can be different from the inputs' scale and zeroPoint.
	//
	// Available since NNAPI feature level 1.
	ADD = 0,
	// Performs a 2-D average pooling operation.
	//
	// The output dimensions are functions of the filter dimensions, stride, and padding.
	//
	// The values in the output tensor are computed as:
	//   output[b, i, j, channel] =
	//     sum_{di, dj}(
	//       input[b, strides[1] * i + di, strides[2] * j + dj, channel]
	//     ) / sum(1)
	//
	// Supported tensor {@link OperandCode}:
	// * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3)
	// * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
	// * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
	// * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4)
	//
	// Supported tensor rank: 4, with "NHWC" or "NCHW" data layout.
	// With the default data layout NHWC, the data is stored in the order of:
	// [batch, height, width, channels]. Alternatively, the data layout could
	// be NCHW, the data storage order of: [batch, channels, height, width].
	// NCHW is supported since NNAPI feature level 3.
	//
	// Both explicit padding and implicit padding are supported.
	//
	// Inputs (explicit padding):
	// * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying the input.
	//      Since NNAPI feature level 3, zero batches is supported for this tensor.
	// * 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on
	//      the left, in the 'width' dimension.
	// * 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on
	//      the right, in the 'width' dimension.
	// * 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on
	//      the top, in the 'height' dimension.
	// * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on
	//      the bottom, in the 'height' dimension.
	// * 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when
	//      walking through input in the 'width' dimension.
	// * 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when
	//      walking through input in the 'height' dimension.
	// * 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter width.
	// * 8: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter height.
	// * 9: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
	//      {@link FuseCode} values. Specifies the activation to invoke on the result.
	// * 10: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
	//       Set to true to specify NCHW data layout for input0 and output0.
	//       Available since NNAPI feature level 3.
	//
	// Inputs (implicit padding):
	// * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying the input.
	//      Since NNAPI feature level 3, zero batches is supported for this tensor.
	// * 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the implicit
	//      padding scheme, has to be one of the {@link PaddingCode} values.
	// * 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when
	//      walking through input in the 'width' dimension.
	// * 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when
	//      walking through input in the 'height' dimension.
	// * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter width.
	// * 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter height.
	// * 6: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
	//      {@link FuseCode} values. Specifies the activation to invoke on the result.
	// * 7: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
	//      Set to true to specify NCHW data layout for input0 and output0.
	//      Available since NNAPI feature level 3.
	//
	// Outputs:
	// * 0: The output 4-D tensor, of shape [batches, out_height, out_width, depth].
	//      For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and
	//      {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor,
	//      the scale and zeroPoint must be the same as input0.
	//
	// Available since NNAPI feature level 1.
	AVERAGE_POOL_2D = 1,
	// Concatenates the input tensors along the given dimension.
	//
	// The input tensors must have identical {@link OperandCode} and the same
	// dimensions except the dimension along the concatenation axis.
	//
	// Supported tensor {@link OperandCode}:
	// * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3)
	// * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
	// * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
	//   (full support since NNAPI feature level 3, see the input section)
	// * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4)
	//
	// Supported tensor rank: up to 4
	//
	// Inputs:
	// * 0 ~ n-1: The list of n input tensors, of shape [D0, D1, ..., Daxis(i), ..., Dm].
	//      Before NNAPI feature level 3, all input tensors of
	//      {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
	//      must have the same scale and zeroPoint as the output tensor.
	//      Input tensors of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}
	//      are allowed to have different scale and zeroPoint.
	//      Since NNAPI feature level 3, zero-sized tensors are supported.
	// * n: An {@link ANEURALNETWORKS_INT32} scalar, specifying the concatenation axis.
	//
	// Outputs:
	// * 0: The output, a tensor of the same {@link OperandCode} as the input
	//      tensors. The output shape is [D0, D1, ..., sum(Daxis(i)), ..., Dm].
	//      Since NNAPI feature level 3, for a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
	//      tensor, the scale and zeroPoint values can be different from the input tensors'.
	//      Before NNAPI feature level 3 they have to be the same as for the input tensors.
	//
	// Available since NNAPI feature level 1.
	CONCATENATION = 2,
	// Performs a 2-D convolution operation.
	//
	// The CONV_2D op sweeps a 2-D filter that can mix channels together over a
	// batch of images, applying the filter to each window of each image of the
	// appropriate size.
	//
	// The output dimensions are functions of the filter dimensions, stride, and padding.
	//
	// The values in the output tensor are computed as:
	//   output[b, i, j, channel] =
	//     sum_{di, dj, k} (
	//       input[b, strides[1] * i + di, strides[2] * j + dj, k] *
	//       filter[channel, di, dj, k]
	//     ) + bias[channel]
	//
	// Supported tensor {@link OperandCode} configurations:
	// * 32 bit floating point:
	//   * {@link ANEURALNETWORKS_TENSOR_FLOAT32} for input, filter, output, and bias.
	// * Quantized:
	//   * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} for input, filter, and output.
	//   * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (with scale set to
	//     input.scale * filter.scale).
	//
	// Available since NNAPI feature level 3:
	// * 16 bit floating point:
	//   * {@link ANEURALNETWORKS_TENSOR_FLOAT16} for input, filter, output, and bias.
	// * Quantized with symmetric per channel quantization for the filter:
	//   * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} for input, and output.
	//   * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter.
	//   * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (scale set to 0.0,
	//     each value scaling is separate and equal to input.scale * filter.scales[channel]).
	//
	// Available since NNAPI feature level 4:
	// * Quantized signed (since NNAPI feature level 4):
	//   * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} for input, filter, and output.
	//   * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (with scale set to
	//     input.scale * filter.scale).
	// * Quantized signed with filter symmetric per channel quantization
	//   (since NNAPI feature level 4):
	//   * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} for input, and output.
	//   * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter.
	//   * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (scale set to 0.0,
	//     each value scaling is separate and equal to input.scale * filter.scales[channel]).
	//
	// Supported tensor rank: 4, with "NHWC" or "NCHW" data layout.
	// With the default data layout NHWC, the data is stored in the order of:
	// [batch, height, width, channels]. Alternatively, the data layout could
	// be NCHW, the data storage order of: [batch, channels, height, width].
	// NCHW is supported since NNAPI feature level 3.
	//
	// Both explicit padding and implicit padding are supported.
	//
	// Inputs (explicit padding):
	// * 0: A 4-D tensor, of shape [batches, height, width, depth_in], specifying the input.
	//      Since NNAPI feature level 3, zero batches is supported for this tensor.
	// * 1: A 4-D tensor, of shape
	//      [depth_out, filter_height, filter_width, depth_in], specifying the filter.
	//      For tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}
	//      the channel dimension (ANeuralNetworksSymmPerChannelQuantParams::channelDim)
	//      must be set to 0.
	// * 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input
	//      tensor of type {@link ANEURALNETWORKS_TENSOR_FLOAT32}
	//      or {@link ANEURALNETWORKS_TENSOR_FLOAT16} the bias must be of the same type.
	//      For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
	//      and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED},
	//      the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint
	//      of 0 and bias_scale == input_scale * filter_scale.
	//      For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL},
	//      the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0
	//      and bias_scale of 0. The actual scale of each value 'i' is equal to
	//      bias_scale[i] = input_scale * filter_scale[i].
	// * 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on
	//      the left, in the 'width' dimension.
	// * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on
	//      the right, in the 'width' dimension.
	// * 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on
	//      the top, in the 'height' dimension.
	// * 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on
	//      the bottom, in the 'height' dimension.
	// * 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when
	//      walking through input in the 'width' dimension.
	// * 8: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when
	//      walking through input in the 'height' dimension.
	// * 9: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
	//      {@link FuseCode} values. Specifies the activation to invoke on the result.
	// * 10: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false.
	//       Set to true to specify NCHW data layout for input0 and output0.
	//       Available since NNAPI feature level 3.
	// * 11: An optional {@link ANEURALNETWORKS_INT32} scalar, specifying the dilation
	//       factor for width. Defaults to 1. If set to k > 1, there will be k-1 skipped
	//       cells between each filter element on the width dimension. If this input is set,
	//       input 12 (dilation factor for height) must be specified as well.
	//       Available since NNAPI feature level 3.
	// * 12: An optional {@link ANEURALNETWORKS_INT32} scalar, specifying the dilation
	//       factor for height. Defaults to 1. If set to k > 1, there will be k-1 skipped
	//       cells between each filter element on the height dimension. If this input is set,
	//       input 11 (dilation factor for width) must be specified as well.
	//       Available since NNAPI feature level 3.
	//
	// Inputs (implicit padding):
	// * 0: A 4-D tensor, of shape [batches, height, width, depth_in], specifying the input.
	//      Since NNAPI feature level 3, zero batches is supported for this tensor.
	// * 1: A 4-D tensor, of shape
	//      [depth_out, filter_height, filter_width, depth_in], specifying the filter.
// * For tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} // * the channel dimension (ANeuralNetworksSymmPerChannelQuantParams::channelDim) // * must be set to 0. // * * 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input // * tensor of type {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * or {@link ANEURALNETWORKS_TENSOR_FLOAT16} the bias must be of the same // * type. // * For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, // * the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint // * of 0 and bias_scale == input_scale * filter_scale. // * For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}, // * the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 // * and bias_scale of 0. The actual scale of each value 'i' is equal to // * bias_scale[i] = input_scale * filter_scale[i]. // * * 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the implicit // * padding scheme, has to be one of the // * {@link PaddingCode} values. // * * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘width’ dimension. // * * 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘height’ dimension. // * * 6: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the // * {@link FuseCode} values. Specifies the activation to // * invoke on the result. // * * 7: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false. // * Set to true to specify NCHW data layout for input0 and output0. // * Available since NNAPI feature level 3. // * * 8: An optional {@link ANEURALNETWORKS_INT32} scalar, specifying the dilation // * factor for width. Defaults to 1. If set to k > 1, there will be k-1 skipped // * cells between each filter element on width dimension. If this input is set, // * input 9 (dilation factor for height) must be specified as well. // * Available since NNAPI feature level 3. // * * 9: An optional {@link ANEURALNETWORKS_INT32} scalar, specifying the dilation // * factor for height. Defaults to 1. If set to k > 1, there will be k-1 skipped // * cells between each filter element on height dimension. If this input is set, // * input 8 (dilation factor for width) must be specified as well. // * Available since NNAPI feature level 3. // * // * Outputs: // * * 0: The output 4-D tensor, of shape // * [batches, out_height, out_width, depth_out]. // * Before NNAPI feature level 3, for output tensor of // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}, the following condition must // * be satisfied: output_scale > input_scale * filter_scale // * // * Available since NNAPI feature level 1. CONV_2D = 3, // * // * Performs a depthwise 2-D convolution operation. // * // * Given an input tensor of shape [batches, height, width, depth_in] and a // * filter tensor of shape [1, filter_height, filter_width, depth_out] // * containing depth_out convolutional filters of depth 1, DEPTHWISE_CONV // * applies a different filter to each input channel (expanding from 1 // * channel to channel_multiplier channels for each), then concatenates the // * results together. // * // * The output has depth_out = depth_in * depth_multiplier channels. // * The output dimensions are functions of the filter dimensions, stride, and // * padding. 
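// * // * (Illustrative note, not from the generated header: with explicit padding and dilation 1, the spatial output dimensions of the convolution and pooling operations in this enum follow the usual convolution arithmetic, using floor division: // * // * out_height = (height + padding_top + padding_bottom - filter_height) / stride_height + 1 // * out_width = (width + padding_left + padding_right - filter_width) / stride_width + 1 // * // * For example, a 224x224 input with a 3x3 filter, stride 2 and padding 1 on every side yields floor((224 + 1 + 1 - 3) / 2) + 1 = 112 in each spatial dimension.)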
// * // * The values in the output tensor are computed as: // * // * output[b, i, j, k * channel_multiplier + q] = // * sum_{di, dj} ( // * input[b, strides[1] * i + di, strides[2] * j + dj, k] * // * filter[1, di, dj, k * channel_multiplier + q] // * ) + bias[k * channel_multiplier + q] // * // * Supported tensor {@link OperandCode} configurations: // * * 32 bit floating point: // * * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} for input, filter, output, and bias. // * // * * Quantized: // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} for input, filter, and output. // * * * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (with scale set to // * * * input.scale * filter.scale). // * // * Available since NNAPI feature level 3: // * * 16 bit floating point: // * * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} for input, filter, output, and bias. // * // * * Quantized with symmetric per channel quantization for the filter: // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} for input, and output. // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter. // * * * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (scale set to 0.0, // * * * each value scaling is separate and equal to input.scale * filter.scales[channel]). // * // * Available since NNAPI feature level 4: // * * Quantized signed (since NNAPI feature level 4): // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} for input, filter, and output. // * * * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (with scale set to // * * * input.scale * filter.scale). // * // * * Quantized signed with filter symmetric per channel quantization // * (since NNAPI feature level 4): // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} for input, and output. // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter. // * * * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (scale set to 0.0, // * * * each value scaling is separate and equal to input.scale * filter.scales[channel]). // * // * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout. // * With the default data layout NHWC, the data is stored in the order of: // * [batch, height, width, channels]. Alternatively, the data layout could // * be NCHW, the data storage order of: [batch, channels, height, width]. // * NCHW is supported since NNAPI feature level 3. // * // * Both explicit padding and implicit padding are supported. // * // * Inputs (explicit padding): // * * 0: A 4-D tensor, of shape [batches, height, width, depth_in], // * specifying the input. // * * 1: A 4-D tensor, of shape [1, filter_height, filter_width, depth_out], // * specifying the filter. // * For tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} // * the channel dimension (ANeuralNetworksSymmPerChannelQuantParams::channelDim) // * must be set to 3. // * * 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input // * tensor of type {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * or {@link ANEURALNETWORKS_TENSOR_FLOAT16} the bias must be of the same type. // * For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, // * the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint // * of 0 and bias_scale == input_scale * filter_scale. // * For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}, // * the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 // * and bias_scale of 0. 
The actual scale of each value 'i' is equal to // * bias_scale[i] = input_scale * filter_scale[i]. // * * 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the left, in the ‘width’ dimension. // * * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the right, in the ‘width’ dimension. // * * 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the top, in the ‘height’ dimension. // * * 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the bottom, in the ‘height’ dimension. // * * 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘width’ dimension. // * * 8: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘height’ dimension. // * * 9: An {@link ANEURALNETWORKS_INT32} scalar, specifying the depthwise // * multiplier. // * * 10: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the // * {@link FuseCode} values. Specifies the activation to // * invoke on the result. // * * 11: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false. // * Set to true to specify NCHW data layout for input0 and output0. // * Available since NNAPI feature level 3. // * * 12: An optional {@link ANEURALNETWORKS_INT32} scalar, specifying the dilation // * factor for width. Defaults to 1. If set to k > 1, there will be k-1 skipped // * cells between each filter element on width dimension. If this input is set, // * input 13 (dilation factor for height) must be specified as well. // * Available since NNAPI feature level 3. // * * 13: An optional {@link ANEURALNETWORKS_INT32} scalar, specifying the dilation // * factor for height. Defaults to 1. If set to k > 1, there will be k-1 skipped // * cells between each filter element on height dimension. If this input is set, // * input 12 (dilation factor for width) must be specified as well. // * Available since NNAPI feature level 3. // * // * Inputs (implicit padding): // * * 0: A 4-D tensor, of shape [batches, height, width, depth_in], // * specifying the input. // * * 1: A 4-D tensor, of shape [1, filter_height, filter_width, depth_out], // * specifying the filter. // * * 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input // * tensor of type {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * or {@link ANEURALNETWORKS_TENSOR_FLOAT16} the bias must be of the same type. // * For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, // * the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint // * of 0 and bias_scale == input_scale * filter_scale. // * For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}, // * the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 // * and bias_scale of 0. The actual scale of each value 'i' is equal to // * bias_scale[i] = input_scale * filter_scale[i]. // * * 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the implicit // * padding scheme, has to be one of the // * {@link PaddingCode} values. // * * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘width’ dimension. // * * 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘height’ dimension. // * * 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the depthwise // * multiplier. 
// * * 7: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the // * {@link FuseCode} values. Specifies the activation to // * invoke on the result. // * * 8: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false. // * Set to true to specify NCHW data layout for input0 and output0. // * Available since NNAPI feature level 3. // * * 9: An optional {@link ANEURALNETWORKS_INT32} scalar, specifying the dilation // * factor for width. Defaults to 1. If set to k > 1, there will be k-1 skipped // * cells between each filter element on width dimension. If this input is set, // * input 10 (dilation factor for height) must be specified as well. // * Available since NNAPI feature level 3. // * * 10: An optional {@link ANEURALNETWORKS_INT32} scalar, specifying the dilation // * factor for height. Defaults to 1. If set to k > 1, there will be k-1 skipped // * cells between each filter element on height dimension. If this input is set, // * input 9 (dilation factor for width) must be specified as well. // * Available since NNAPI feature level 3. // * // * Outputs: // * * 0: The output 4-D tensor, of shape // * [batches, out_height, out_width, depth_out]. Before NNAPI feature level 3, for // * output tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}, // * the following condition must be satisfied: // * output_scale > input_scale * filter_scale // * // * Available since NNAPI feature level 1. DEPTHWISE_CONV_2D = 4, // * // * Rearranges data from depth into blocks of spatial data. // * // * More specifically, this op outputs a copy of the input tensor where // * values from the depth dimension are moved in spatial blocks to the height // * and width dimensions. The value block_size indicates the input block size // * and how the data is moved. // * // * Chunks of data of size block_size * block_size from depth are rearranged // * into non-overlapping blocks of size block_size x block_size. // * // * The width of the output tensor is input_width * block_size, whereas the // * height is input_height * block_size. The depth of the input tensor must // * be divisible by block_size * block_size. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout. // * With the default data layout NHWC, the data is stored in the order of: // * [batch, height, width, channels]. Alternatively, the data layout could // * be NCHW, the data storage order of: [batch, channels, height, width]. // * NCHW is supported since NNAPI feature level 3. // * // * Inputs: // * * 0: A 4-D tensor, of shape [batches, height, width, depth_in], // * specifying the input. // * * 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the block_size. // * block_size must be >=1 and block_size * block_size must be a divisor // * of the input depth. // * * 2: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false. // * Set to true to specify NCHW data layout for input0 and output0. // * Available since NNAPI feature level 3. // * // * Outputs: // * * 0: The output 4-D tensor, of shape [batch, height*block_size, // * width*block_size, depth/(block_size*block_size)].
// * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 1. DEPTH_TO_SPACE = 5, // * // * Dequantizes the input tensor. // * // * The formula is: // * // * output = (input - zeroPoint) * scale. // * // * Supported input tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported output tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32}. // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: A tensor. // * Since NNAPI feature level 3, this tensor may be zero-sized. // * // * Outputs: // * * 0: A tensor with the same shape as input0. // * // * Available since NNAPI feature level 1. DEQUANTIZE = 6, // * // * Looks up sub-tensors in the input tensor. // * // * This operator takes for input a tensor of values (Values) and // * a one-dimensional tensor of selection indices (Lookups). // * The output tensor is the concatenation of sub-tensors of Values as // * selected by Lookups. // * // * Think of Values as being sliced along its first dimension: // * The entries in Lookups select which slices are concatenated together // * to create the output tensor. // * // * For example, if Values has a shape of [40, 200, 300] and // * Lookups has a shape of [3], all three values found in Lookups are // * expected to be between 0 and 39. The resulting tensor must // * have a shape of [3, 200, 300]. // * // * If a value in Lookups is out of bounds, the operation must fail // * and an error must be reported. // * // * Supported value tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 4) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported value tensor rank: from 2 // * // * Inputs: // * * 0: Lookups. A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. // * The values are indices into the first dimension of Values. // * * 1: Values. An n-D tensor, where n >= 2, from which sub-tensors are // * extracted. // * // * Output: // * * 0: An n-D tensor with the same rank and shape as the Values // * tensor, except for the first dimension which has the same size // * as Lookups' only dimension. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input1. // * // * Available since NNAPI feature level 1. EMBEDDING_LOOKUP = 7, // * // * Computes element-wise floor() on the input tensor. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: A tensor.
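// * // * (Illustrative example, not from the generated header: floor applied to [-1.5, 0.2, 3.7] yields [-2.0, 0.0, 3.0]; the output keeps the {@link OperandCode} and shape of the input.)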
// * // * Outputs: // * * 0: The output tensor, of the same {@link OperandCode} and dimensions as // * the input tensor. // * // * Available since NNAPI feature level 1. FLOOR = 8, // * // * Denotes a fully (densely) connected layer, which connects all elements // * in the input tensor with each element in the output tensor. // * // * This layer implements the operation: // * // * outputs = activation(inputs * weights’ + bias) // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4. // * // * Inputs: // * * 0: A tensor of at least rank 2, specifying the input. If rank is // * greater than 2, then it gets flattened to a 2-D Tensor. The // * (flattened) 2-D Tensor is reshaped (if necessary) to // * [batch_size, input_size], where "input_size" corresponds to the // * number of inputs to the layer, matching the second dimension of // * weights, and "batch_size" is calculated by dividing the number of // * elements by "input_size". // * Since NNAPI feature level 3, zero batch_size is supported for this tensor. // * * 1: A 2-D tensor, specifying the weights, of shape // * [num_units, input_size], where "num_units" corresponds to the number // * of output nodes. // * * 2: A 1-D tensor, of shape [num_units], specifying the bias. For input // * tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, the bias should // * also be of {@link ANEURALNETWORKS_TENSOR_FLOAT32}. // * For input tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, // * the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, // * with zeroPoint of 0 and bias_scale == input_scale * filter_scale. // * * 3: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the // * {@link FuseCode} values. Specifies the activation to // * invoke on the result. // * // * Outputs: // * * 0: The output tensor, of shape [batch_size, num_units]. Before NNAPI feature level 3, for // * output tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}, the following // * condition must be satisfied: output_scale > input_scale * filter_scale. // * // * Available since NNAPI feature level 1. FULLY_CONNECTED = 9, // * // * Looks up sub-tensors in the input tensor using a key-value map. // * // * This operator takes for input a tensor of values (Values), // * a one-dimensional tensor of selection values (Lookups) and // * a one-dimensional tensor that maps these values to Values // * indexes. The output tensor is the concatenation of sub-tensors of // * Values as selected by Lookups via Keys. // * // * Think of Values as being sliced along its outer-most dimension. // * The output is a concatenation of selected slices, with one slice // * for each entry of Lookups. The slice selected is the one at the // * same index as the Maps entry that matches the value in Lookups. // * // * For a hit, the corresponding sub-tensor of Values is included // * in the Output tensor. For a miss, the corresponding sub-tensor in // * Output must have zero values. // * // * For example, if Values has shape of [40, 200, 300], // * Keys should have a shape of [40]. If Lookups tensor has shape // * of [3], three slices are being concatenated, so the resulting tensor // * must have the shape of [3, 200, 300]. 
If the first entry in Lookups // * has the value 123456, that value must be located in the Keys tensor. // * If the sixth entry of Keys contains 123456, the sixth slice of Values // * must be selected. If no entry in Keys has 123456, a slice of zeroes // * must be concatenated. // * // * Supported value tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * // * Supported value tensor rank: from 2 // * // * Inputs: // * * 0: Lookups. A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor with // * shape [ k ]. // * * 1: Keys. A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor with shape // * [ n ]; the Keys and Values pair represents a map, i.e., the ith element // * in Keys (Keys[i]) is the key to select the ith sub-tensor in Values // * (Values[i]), where 0 <= i <= n-1. The Keys tensor *MUST* be sorted in // * ascending order. // * * 2: Values. A tensor with shape of [ n, … ]; i.e., the first dimension // * must be n. // * // * Outputs: // * * 0: Output. A tensor with shape [ k, … ]. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} tensor, // * the scale and zeroPoint must be the same as input2. // * * 1: Hits. A boolean tensor with shape [ k ] indicates whether the lookup // * hits (True) or not (False). // * Stored as {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} with offset 0 // * and scale 1.0f. // * A non-zero byte represents True, a hit. A zero indicates otherwise. // * // * Available since NNAPI feature level 1. HASHTABLE_LOOKUP = 10, // * // * Applies L2 normalization along the axis dimension. // * // * The values in the output tensor are computed as: // * // * output[batch, row, col, channel] = // * input[batch, row, col, channel] / // * sqrt(sum_{c} pow(input[batch, row, col, c], 2)) // * // * By default the axis dimension is the last dimension of the input tensor. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4 // * Tensors with rank less than 4 are only supported since NNAPI feature level 3. // * // * Inputs: // * * 0: An n-D tensor, specifying the tensor to be normalized. // * * 1: An optional {@link ANEURALNETWORKS_INT32} scalar, default to -1, // * specifying the dimension normalization would be performed on. // * Negative index is used to specify axis from the end (e.g. -1 for // * the last axis). Must be in the range [-n, n). // * Available since NNAPI feature level 3. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} and same shape as input0. // * For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}, // * the scale must be 1.f / 128 and the zeroPoint must be 128. // * For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, // * the scale must be 1.f / 128 and the zeroPoint must be 0. // * // * NOTE: Before NNAPI feature level 4, if the elements along an axis are all zeros, // * the result is undefined. Since NNAPI feature level 4, if the elements along an axis // * are all zeros, the result is logical zero. // * // * Available since NNAPI feature level 1. L2_NORMALIZATION = 11, // * // * Performs a 2-D L2 pooling operation. // * // * The output dimensions are functions of the filter dimensions, stride, and // * padding.
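// * // * (Illustrative example, not from the generated header: for a pooling window containing the values {3, 4}, the formula below yields sqrt((9 + 16) / 2) ≈ 3.54, i.e. the square root of the mean of the squared window values.)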
// * // * The values in the output tensor are computed as: // * // * output[b, i, j, c] = // * sqrt(sum_{di, dj} pow(input[b, strides[1] * i + di, strides[2] * j + dj, c], 2) / // * sum(1)) // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout. // * With the default data layout NHWC, the data is stored in the order of: // * [batch, height, width, channels]. Alternatively, the data layout could // * be NCHW, the data storage order of: [batch, channels, height, width]. // * NCHW is supported since NNAPI feature level 3. // * // * Both explicit padding and implicit padding are supported. // * // * Inputs (explicit padding): // * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying // * the input. // * Since NNAPI feature level 3, zero batches is supported for this tensor. // * * 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the left, in the ‘width’ dimension. // * * 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the right, in the ‘width’ dimension. // * * 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the top, in the ‘height’ dimension. // * * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the bottom, in the ‘height’ dimension. // * * 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘width’ dimension. // * * 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘height’ dimension. // * * 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter // * width. // * * 8: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter // * height. // * * 9: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the // * {@link FuseCode} values. Specifies the activation to // * invoke on the result. // * * 10: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false. // * Set to true to specify NCHW data layout for input0 and output0. // * Available since NNAPI feature level 3. // * // * Inputs (implicit padding): // * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying // * the input. // * Since NNAPI feature level 3, zero batches is supported for this tensor. // * * 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the implicit // * padding scheme, has to be one of the // * {@link PaddingCode} values. // * * 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘width’ dimension. // * * 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘height’ dimension. // * * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter // * width. // * * 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter // * height. // * * 6: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the // * {@link FuseCode} values. Specifies the activation to // * invoke on the result. // * * 7: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false. // * Set to true to specify NCHW data layout for input0 and output0. // * Available since NNAPI feature level 3. // * // * Outputs: // * * 0: The output 4-D tensor, of shape // * [batches, out_height, out_width, depth]. // * // * Available since NNAPI feature level 1. 
L2_POOL_2D = 12, // * // * Applies Local Response Normalization along the depth dimension. // * // * The 4-D input tensor is treated as a 3-D array of 1-D vectors (along the // * last dimension), and each vector is normalized independently. Within a // * given vector, each component is divided by the weighted, squared sum of // * inputs within depth_radius. // * // * The output is calculated using this formula: // * // * sqr_sum[a, b, c, d] = sum( // * pow(input[a, b, c, d - depth_radius : d + depth_radius + 1], 2)) // * output = input / pow((bias + alpha * sqr_sum), beta) // * // * For input tensor with rank less than 4, independently normalizes each // * 1-D slice along specified dimension. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Supported tensor rank: up to 4 // * Tensors with rank less than 4 are only supported since NNAPI feature level 3. // * // * Inputs: // * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying // * the input. // * * 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the radius of // * the normalization window. // * * 2: A scalar, specifying the bias, must not be zero. // * For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT16}, the bias // * value must be of {@link ANEURALNETWORKS_FLOAT16}. // * For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, the bias // * value must be of {@link ANEURALNETWORKS_FLOAT32}. // * * 3: A scalar, specifying the scale factor, alpha. // * For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT16}, the // * alpha value must be of {@link ANEURALNETWORKS_FLOAT16}. // * For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, the // * alpha value must be of {@link ANEURALNETWORKS_FLOAT32}. // * * 4: A scalar, specifying the exponent, beta. // * For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT16}, the beta // * value must be of {@link ANEURALNETWORKS_FLOAT16}. // * For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, the beta // * value must be of {@link ANEURALNETWORKS_FLOAT32}. // * * 5: An optional {@link ANEURALNETWORKS_INT32} scalar, default to -1, // * specifying the dimension normalization would be performed on. // * Negative index is used to specify axis from the end (e.g. -1 for // * the last axis). Must be in the range [-n, n). // * Available since NNAPI feature level 3. // * // * Outputs: // * * 0: The output tensor of same shape as input0. // * // * Available since NNAPI feature level 1. LOCAL_RESPONSE_NORMALIZATION = 13, // * // * Computes sigmoid activation on the input tensor element-wise. // * // * The output is calculated using this formula: // * // * output = 1 / (1 + exp(-input)) // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4. // * // * Inputs: // * * 0: A tensor, specifying the input. // * Since NNAPI feature level 3, this tensor may be zero-sized. // * // * Outputs: // * * 0: The output tensor of same shape as input0. // * For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}, // * the scale must be 1.f / 256 and the zeroPoint must be 0. 
// * For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, // * the scale must be 1.f / 256 and the zeroPoint must be -128. // * // * Available since NNAPI feature level 1. LOGISTIC = 14, // * // * Projects an input to a bit vector via locality-sensitive hashing. // * // * Supported input tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * // * Supported input tensor rank: from 1 // * // * Inputs: // * * 0: Hash functions. Dim.size == 2, DataType: Float. // * Tensor[0].Dim[0]: Number of hash functions. // * Tensor[0].Dim[1]: Number of projected output bits generated by each // * hash function. // * If the projection type is Sparse: // * Tensor[0].Dim[1] + ceil(log2(Tensor[0].Dim[0])) <= 32 // * // * * 1: Input. Dim.size >= 1, no restriction on DataType. // * * 2: Weight. Optional. Dim.size == 1, DataType: Float. // * If not set, each input element is considered to have the same weight // * of 1.0. // * Tensor[1].Dim[0] == Tensor[2].Dim[0] // * * 3: Type: // * Sparse: // * Value LSHProjectionType_SPARSE(=3) (since NNAPI feature level 3). // * Computed bit vector is considered to be sparse. // * Each output element is an int32 made up of multiple bits // * computed from hash functions. // * // * NOTE: To avoid collisions across hash functions, an offset value // * of k * (1 << Tensor[0].Dim[1]) will be added to each signature, // * where k is the index of the hash function. // * // * Value LSHProjectionType_SPARSE_DEPRECATED(=1). // * Legacy behavior that does not include the offset value. // * // * Dense: // * Value LSHProjectionType_DENSE(=2). // * Computed bit vector is considered to be dense. Each output // * element represents a bit and can take the value of either // * 0 or 1. // * // * Outputs: // * * 0: If the projection type is Sparse: // * Output.Dim == { Tensor[0].Dim[0] } // * A tensor of int32 that represents hash signatures. // * // * If the projection type is Dense: // * Output.Dim == { Tensor[0].Dim[0] * Tensor[0].Dim[1] } // * A flattened tensor that represents projected bit vectors. // * // * Available since NNAPI feature level 1. // * The offset value for sparse projections was added in NNAPI feature level 3. LSH_PROJECTION = 15, // * // * Performs a single time step in a Long Short-Term Memory (LSTM) layer. // * // * The LSTM operation is described by the following equations. // * // * \f{eqnarray*}{ // * i_t =& \sigma(W_{xi}x_t+W_{hi}h_{t-1}+W_{ci}C_{t-1}+b_i) & \\ // * f_t =& \sigma(W_{xf}x_t+W_{hf}h_{t-1}+W_{cf}C_{t-1}+b_f) & \\ // * C_t =& clip(f_t \odot C_{t-1} + i_t \odot // * g(W_{xc}x_t+W_{hc}h_{t-1}+b_c),\ t_{cell}) & \\ // * o_t =& \sigma(W_{xo}x_t+W_{ho}h_{t-1}+W_{co}C_t+b_o) & \\ // * & & \\ // * & clip(W_{proj}(o_t \odot g(C_t))+b_{proj},\ t_{proj}) // * & if\ there\ is\ a\ projection; \\ // * h_t =& & \\ // * & o_t \odot g(C_t) & otherwise.
\\ // * \f} // * Where: // * * \f$x_t\f$ is the input, // * * \f$i_t\f$ is the input gate, // * * \f$f_t\f$ is the forget gate, // * * \f$C_t\f$ is the cell state, // * * \f$o_t\f$ is the output, // * * \f$h_t\f$ is the output state, // * * \f$\sigma\f$ is the logistic sigmoid function, // * * \f$g\f$ is the cell input and cell output activation function, usually // * \f$tanh\f$, // * * \f$W_{xi}\f$ is the input-to-input weight matrix, // * * \f$W_{hi}\f$ is the recurrent-to-input weight matrix, // * * \f$W_{ci}\f$ is the cell-to-input weight matrix, // * * \f$b_i\f$ is the input gate bias, // * * \f$W_{xf}\f$ is the input-to-forget weight matrix, // * * \f$W_{hf}\f$ is the recurrent-to-forget weight matrix, // * * \f$W_{cf}\f$ is the cell-to-forget weight matrix, // * * \f$b_f\f$ is the forget gate bias, // * * \f$W_{xc}\f$ is the input-to-cell weight matrix, // * * \f$W_{hc}\f$ is the recurrent-to-cell weight matrix, // * * \f$b_c\f$ is the cell bias, // * * \f$W_{xo}\f$ is the input-to-output weight matrix, // * * \f$W_{ho}\f$ is the recurrent-to-output weight matrix, // * * \f$W_{co}\f$ is the cell-to-output weight matrix, // * * \f$b_o\f$ is the output gate bias, // * * \f$W_{proj}\f$ is the projection weight matrix, // * * \f$b_{proj}\f$ is the projection bias, // * * \f$t_{cell}\f$ is the threshold for clipping the cell state, and // * * \f$t_{proj}\f$ is the threshold for clipping the projected output. // * * \f$\odot\f$ is the // * <a href="https://en.wikipedia.org/wiki/Hadamard_product_(matrices)"> // * Hadamard product</a> that takes two matrices and produces another // * matrix, each element of which is the product of the corresponding // * elements of the input matrices. // * // * Since NNAPI feature level 3, LSTM supports layer normalization. // * In case layer normalization is used, the inputs to internal activation // * functions (sigmoid and \f$g\f$) are normalized, rescaled and recentered // * following an approach from section 3.1 of // * https://arxiv.org/pdf/1607.06450.pdf // * // * The operation has the following independently optional inputs: // * * The cell-to-input weights (\f$W_{ci}\f$), cell-to-forget weights // * (\f$W_{cf}\f$) and cell-to-output weights (\f$W_{co}\f$) either all // * have values or none of them have values (i.e., all set to null). If // * they have values, the peephole optimization is used. // * * The input-to-input weights (\f$W_{xi}\f$), recurrent-to-input weights // * (\f$W_{hi}\f$) and input gate bias (\f$b_i\f$) either all have values, // * or none of them have values. If they have no values, coupling of input // * and forget gates (CIFG) is used, in which case the input gate // * (\f$i_t\f$) is calculated using the following equation instead. // * \f{eqnarray*}{ // * i_t = 1 - f_t // * \f} // * In case peephole optimization is used and CIFG is not used, // * cell-to-input (\f$W_{ci}\f$) weights must be present. Otherwise, the // * cell-to-input weights must have no value. // * * The projection weights (\f$W_{proj}\f$) are required only for the // * recurrent projection layer, and should otherwise have no value. // * * The projection bias (\f$b_{proj}\f$) may (but is not required to) have a // * value if the recurrent projection layer exists, and should otherwise // * have no value. // * * (NNAPI feature level 3 or later) The four layer normalization weights either all have // * values or none of them have values.
Additionally, if CIFG is used, // * the input layer normalization weights tensor is omitted and the other layer // * normalization weights either all have values or none of them have // * values. Layer normalization is used when the values of all the layer // * normalization weights are present. // * // * References: // * // * The default non-peephole non-CIFG implementation is based on: // * http://www.bioinf.jku.at/publications/older/2604.pdf // * S. Hochreiter and J. Schmidhuber. "Long Short-Term Memory". Neural // * Computation, 9(8):1735-1780, 1997. // * // * The peephole implementation and projection layer are based on: // * https://research.google.com/pubs/archive/43905.pdf // * Hasim Sak, Andrew Senior, and Francoise Beaufays. "Long short-term memory // * recurrent neural network architectures for large scale acoustic // * modeling." INTERSPEECH, 2014. // * (However, the concept of peephole optimization was introduced in work // * prior to this paper.) // * // * The coupling of input and forget gate (CIFG) is based on: // * http://arxiv.org/pdf/1503.04069.pdf // * Greff et al. "LSTM: A Search Space Odyssey" // * // * The layer normalization is based on: // * https://arxiv.org/pdf/1607.06450.pdf // * Jimmy Ba et al. "Layer Normalization" // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * All input and output tensors must be of the same type. // * // * Inputs: // * * 0: The input (\f$x_t\f$). // * A 2-D tensor of shape [batch_size, input_size], where “batch_size” // * corresponds to the batching dimension, and “input_size” is the size // * of the input. // * * 1: The input-to-input weights (\f$W_{xi}\f$). Optional. // * A 2-D tensor of shape [num_units, input_size], where “num_units” // * corresponds to the number of cell units. // * * 2: The input-to-forget weights (\f$W_{xf}\f$). // * A 2-D tensor of shape [num_units, input_size]. // * * 3: The input-to-cell weights (\f$W_{xc}\f$). // * A 2-D tensor of shape [num_units, input_size]. // * * 4: The input-to-output weights (\f$W_{xo}\f$). // * A 2-D tensor of shape [num_units, input_size]. // * * 5: The recurrent-to-input weights (\f$W_{hi}\f$). Optional. // * A 2-D tensor of shape [num_units, output_size], where “output_size” // * corresponds to either the number of cell units (i.e., “num_units”), // * or the second dimension of the “projection_weights”, if defined. // * * 6: The recurrent-to-forget weights (\f$W_{hf}\f$). // * A 2-D tensor of shape [num_units, output_size]. // * * 7: The recurrent-to-cell weights (\f$W_{hc}\f$). // * A 2-D tensor of shape [num_units, output_size]. // * * 8: The recurrent-to-output weights (\f$W_{ho}\f$). // * A 2-D tensor of shape [num_units, output_size]. // * * 9: The cell-to-input weights (\f$W_{ci}\f$). Optional. // * A 1-D tensor of shape [num_units]. // * * 10:The cell-to-forget weights (\f$W_{cf}\f$). Optional. // * A 1-D tensor of shape [num_units]. // * * 11:The cell-to-output weights (\f$W_{co}\f$). Optional. // * A 1-D tensor of shape [num_units]. // * * 12:The input gate bias (\f$b_i\f$). Optional. // * A 1-D tensor of shape [num_units]. // * * 13:The forget gate bias (\f$b_f\f$). // * A 1-D tensor of shape [num_units]. // * * 14:The cell bias (\f$b_c\f$). // * A 1-D tensor of shape [num_units]. // * * 15:The output gate bias (\f$b_o\f$). // * A 1-D tensor of shape [num_units]. // * * 16:The projection weights (\f$W_{proj}\f$). Optional.
// * A 2-D tensor of shape [output_size, num_units]. // * * 17:The projection bias (\f$b_{proj}\f$). Optional. // * A 1-D tensor of shape [output_size]. // * * 18:The output state (in) (\f$h_{t-1}\f$). // * A 2-D tensor of shape [batch_size, output_size]. // * * 19:The cell state (in) (\f$C_{t-1}\f$). // * A 2-D tensor of shape [batch_size, num_units]. // * * 20:The activation function (\f$g\f$). // * A value indicating the activation function: // * <ul> // * <li>0: None; // * <li>1: Relu; // * <li>3: Relu6; // * <li>4: Tanh; // * <li>6: Sigmoid. // * </ul> // * * 21:The clipping threshold (\f$t_{cell}\f$) for the cell state, such // * that values are bound within [-cell_clip, cell_clip]. If set to 0.0 // * then clipping is disabled. // * Until NNAPI feature level 3 this scalar must be of type {@link // * ANEURALNETWORKS_FLOAT32}. Since NNAPI feature level 3, if all the input // * tensors have type {@link ANEURALNETWORKS_TENSOR_FLOAT32}, this // * scalar must be of the type {@link ANEURALNETWORKS_FLOAT32}, // * otherwise if all the input tensors have the type {@link // * ANEURALNETWORKS_TENSOR_FLOAT16}, this scalar must be of type {@link // * ANEURALNETWORKS_FLOAT16}. // * * 22:The clipping threshold (\f$t_{proj}\f$) for the output from the // * projection layer, such that values are bound within // * [-proj_clip, proj_clip]. If set to 0.0 then clipping is disabled. // * Until NNAPI feature level 3 this scalar must be of type {@link // * ANEURALNETWORKS_FLOAT32}. Since NNAPI feature level 3, if all the input // * tensors have type {@link ANEURALNETWORKS_TENSOR_FLOAT32}, this // * scalar must be of the type {@link ANEURALNETWORKS_FLOAT32}, // * otherwise if all the input tensors have the type {@link // * ANEURALNETWORKS_TENSOR_FLOAT16}, this scalar must be of type {@link // * ANEURALNETWORKS_FLOAT16}. // * Since NNAPI feature level 3, there are additional inputs to this op: // * * 23:The input layer normalization weights. // * A 1-D tensor of shape [num_units]. Used to rescale normalized inputs // * to activation at input gate. // * * 24:The forget layer normalization weights. // * A 1-D tensor of shape [num_units]. Used to rescale normalized inputs // * to activation at forget gate. // * * 25:The cell layer normalization weights. // * A 1-D tensor of shape [num_units]. Used to rescale normalized inputs // * to activation at cell gate. // * * 26:The output layer normalization weights. // * A 1-D tensor of shape [num_units]. Used to rescale normalized inputs // * to activation at output gate. // * // * Outputs: // * * 0: The scratch buffer. // * A 2-D tensor of shape [batch_size, num_units * 3] with CIFG, or // * [batch_size, num_units * 4] without CIFG. // * * 1: The output state (out) (\f$h_t\f$). // * A 2-D tensor of shape [batch_size, output_size]. // * * 2: The cell state (out) (\f$C_t\f$). // * A 2-D tensor of shape [batch_size, num_units]. // * * 3: The output (\f$o_t\f$). // * A 2-D tensor of shape [batch_size, output_size]. This is effectively // * the same as the current “output state (out)” value. // * // * Available since NNAPI feature level 1. LSTM = 16, // * // * Performs a 2-D max pooling operation. // * // * The output dimensions are functions of the filter dimensions, stride, and // * padding.
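// * // * (Illustrative example, not from the generated header: max pooling a [1, 4, 4, 1] input with a 2x2 filter and stride 2 produces a [1, 2, 2, 1] output, each element being the maximum of its 2x2 window.)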
// * // * The values in the output tensor are computed as: // * // * output[b, i, j, channel] = // * max_{di, dj} ( // * input[b, strides[1] * i + di, strides[2] * j + dj, channel] // * ) // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout. // * With the default data layout NHWC, the data is stored in the order of: // * [batch, height, width, channels]. Alternatively, the data layout could // * be NCHW, the data storage order of: [batch, channels, height, width]. // * NCHW is supported since NNAPI feature level 3. // * // * Both explicit padding and implicit padding are supported. // * // * Inputs (explicit padding): // * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying // * the input. // * Since NNAPI feature level 3, zero batches is supported for this tensor. // * * 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the left, in the ‘width’ dimension. // * * 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the right, in the ‘width’ dimension. // * * 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the top, in the ‘height’ dimension. // * * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the bottom, in the ‘height’ dimension. // * * 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘width’ dimension. // * * 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘height’ dimension. // * * 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter // * width. // * * 8: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter // * height. // * * 9: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the // * {@link FuseCode} values. Specifies the activation to // * invoke on the result. // * * 10: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false. // * Set to true to specify NCHW data layout for input0 and output0. // * Available since NNAPI feature level 3. // * // * Inputs (implicit padding): // * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying // * the input. // * Since NNAPI feature level 3, zero batches is supported for this tensor. // * * 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the implicit // * padding scheme, has to be one of the // * {@link PaddingCode} values. // * * 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘width’ dimension. // * * 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘height’ dimension. // * * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter // * width. // * * 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter // * height. // * * 6: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the // * {@link FuseCode} values. Specifies the activation to // * invoke on the result. // * * 7: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false. // * Set to true to specify NCHW data layout for input0 and output0. // * Available since NNAPI feature level 3. 
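// *
// * A minimal sketch, not from the generated header, of how the implicit-padding
// * input list above maps onto the underlying NNAPI C call that these bindings
// * mirror. It assumes the operand indices were created earlier with
// * ANeuralNetworksModel_addOperand, that the scalar operands were filled with
// * ANeuralNetworksModel_setOperandValue, and that the optional NCHW layout
// * scalar (input 7) is omitted; the index variable names are hypothetical.
// *
// *     uint32_t inputs[7]  = {input, padding_scheme, stride_w, stride_h,
// *                            filter_w, filter_h, fuse_code};
// *     uint32_t outputs[1] = {output};
// *     ANeuralNetworksModel_addOperation(model, ANEURALNETWORKS_MAX_POOL_2D,
// *                                       7, inputs, 1, outputs);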
// * // * Outputs: // * * 0: The output 4-D tensor, of shape // * [batches, out_height, out_width, depth]. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 1. MAX_POOL_2D = 17, // * // * Multiplies two tensors, element-wise. // * // * Takes two input tensors of identical {@link OperandCode} and compatible // * dimensions. The output is the product of both input tensors, optionally // * modified by an activation function. // * // * Two dimensions are compatible when: // * 1. they are equal, or // * 2. one of them is 1 // * // * The size of the resulting output is the maximum size along each dimension // * of the input operands. It starts with the trailing dimensions, and works // * its way forward. // * // * Since NNAPI feature level 3, generic zero-sized input tensors are supported. A zero // * dimension is only compatible with 0 or 1. The size of the output // * dimension is zero if either of the corresponding input dimensions is zero. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * * {@link ANEURALNETWORKS_TENSOR_INT32} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: A tensor. // * * 1: A tensor of the same {@link OperandCode}, and compatible dimensions // * as input0. // * * 2: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the // * {@link FuseCode} values. Specifies the activation to // * invoke on the result. // * For a {@link ANEURALNETWORKS_TENSOR_INT32} tensor, // * the {@link FuseCode} must be "NONE". // * // * Outputs: // * * 0: The product, a tensor of the same {@link OperandCode} as input0. // * For output tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, // * the following condition must be satisfied: // * output_scale > input1_scale * input2_scale. // * // * Available since NNAPI feature level 1. MUL = 18, // * // * Computes rectified linear activation on the input tensor element-wise. // * // * The output is calculated using this formula: // * // * output = max(0, input) // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4. // * // * Inputs: // * * 0: A tensor, specifying the input. // * Since NNAPI feature level 3, this tensor may be zero-sized. // * // * Outputs: // * * 0: The output tensor of same shape as input0. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 1. RELU = 19, // * // * Computes rectified linear 1 activation on the input tensor element-wise.
// * // * The output is calculated using this formula: // * // * output = min(1.f, max(-1.f, input)) // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4. // * // * Inputs: // * * 0: A tensor, specifying the input. // * Since NNAPI feature level 3, this tensor may be zero-sized. // * // * Outputs: // * * 0: The output tensor of the same shape as input0. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 1. RELU1 = 20, // * // * Computes rectified linear 6 activation on the input tensor element-wise. // * // * The output is calculated using this formula: // * // * output = min(6, max(0, input)) // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4. // * // * Inputs: // * * 0: A tensor, specifying the input. // * Since NNAPI feature level 3, this tensor may be zero-sized. // * // * Outputs: // * * 0: The output tensor of same shape as input0. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 1. RELU6 = 21, // * // * Reshapes a tensor. // * // * Given a tensor, this operation returns a tensor that has the same values as // * the input, but with a newly specified shape. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * * {@link ANEURALNETWORKS_TENSOR_INT32} (since NNAPI feature level 6) // * // * Supported tensor rank: up to 4. // * // * Inputs: // * * 0: A tensor, specifying the tensor to be reshaped. // * * 1: A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, defining the // * shape of the output tensor. The number of elements implied by shape // * must be the same as the number of elements in the input tensor. // * // * If one component of shape is the special value -1, the size of that // * dimension is computed so that the total size remains constant. In // * particular, a shape of [-1] flattens into 1-D. At most one component // * of shape can be -1. // * // * Outputs: // * * 0: The output tensor, of shape specified by the input shape. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 1. RESHAPE = 22, // * // * Resizes images to the given size using bilinear interpolation. // * // * Resized images will be distorted if their output aspect ratio is not the // * same as input aspect ratio.
The corner pixels of output may not be the // * same as corner pixels of input. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout. // * With the default data layout NHWC, the data is stored in the order of: // * [batch, height, width, channels]. Alternatively, the data layout could // * be NCHW, the data storage order of: [batch, channels, height, width]. // * NCHW is supported since NNAPI feature level 3. // * // * Both resizing by shape and resizing by scale are supported. // * // * Inputs (resizing by shape): // * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying // * the input. // * Since NNAPI feature level 3, zero batches is supported for this tensor. // * * 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output // * width of the output tensor. // * * 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output // * height of the output tensor. // * * 3: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false. // * Set to true to specify NCHW data layout for input0 and output0. // * Available since NNAPI feature level 3. // * * 4: Align corners. An optional {@link ANEURALNETWORKS_BOOL} // * scalar, default to false. If True, the centers of the 4 corner // * pixels of the input and output tensors are aligned, preserving the // * values at the corner pixels. // * Available since NNAPI feature level 4. // * * 5: Half pixel centers. An optional {@link ANEURALNETWORKS_BOOL} // * scalar, default to false. If True, the pixel centers are assumed to // * be at (0.5, 0.5). This is the default behavior of image.resize in // * TF 2.0. If this parameter is True, then align_corners parameter // * must be False. // * Available since NNAPI feature level 4. // * // * Inputs (resizing by scale, since NNAPI feature level 3): // * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying // * the input. Zero batches is supported for this tensor. // * * 1: A scalar, specifying width_scale, the scaling factor of the width // * dimension from the input tensor to the output tensor. The output // * width is calculated as new_width = floor(width * width_scale). // * The scalar must be of {@link ANEURALNETWORKS_FLOAT16} if input0 is // * of {@link ANEURALNETWORKS_TENSOR_FLOAT16} and of // * {@link ANEURALNETWORKS_FLOAT32} otherwise. // * * 2: A scalar, specifying height_scale, the scaling factor of the height // * dimension from the input tensor to the output tensor. The output // * height is calculated as new_height = floor(height * height_scale). // * The scalar must be of {@link ANEURALNETWORKS_FLOAT16} if input0 is // * of {@link ANEURALNETWORKS_TENSOR_FLOAT16} and of // * {@link ANEURALNETWORKS_FLOAT32} otherwise. // * * 3: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false. // * Set to true to specify NCHW data layout for input0 and output0. // * * 4: Align corners. An optional {@link ANEURALNETWORKS_BOOL} // * scalar, default to false. If True, the centers of the 4 corner // * pixels of the input and output tensors are aligned, preserving the // * values at the corner pixels. // * Available since NNAPI feature level 4. // * * 5: Half pixel centers. 
An optional {@link ANEURALNETWORKS_BOOL} // * scalar, default to false. If True, the pixel centers are assumed to // * be at (0.5, 0.5). This is the default behavior of image.resize in // * TF 2.0. If this parameter is True, then align_corners parameter // * must be False. // * Available since NNAPI feature level 4. // * // * Outputs: // * * 0: The output 4-D tensor, of shape // * [batches, new_height, new_width, depth]. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 1. RESIZE_BILINEAR = 23, // * // * A basic recurrent neural network layer. // * // * This layer implements the operation: // * outputs = state = activation(inputs * input_weights + // * state * recurrent_weights + bias) // * // * Where: // * * “input_weights” is a weight matrix that multiplies the inputs; // * * “recurrent_weights” is a weight matrix that multiplies the current // * “state” which itself is the output from the previous time step // * computation; // * * “bias” is a bias vector (added to each output vector in the batch); // * * “activation” is the function passed as the “fused_activation_function” // * argument (if not “NONE”). // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * The input tensors must all be the same type. // * // * Inputs: // * * 0: input. // * A 2-D tensor of shape [batch_size, input_size], where “batch_size” // * corresponds to the batching dimension, and “input_size” is the size // * of the input. // * * 1: weights. // * A 2-D tensor of shape [num_units, input_size], where “num_units” // * corresponds to the number of units. // * * 2: recurrent_weights. // * A 2-D tensor of shape [num_units, num_units], with columns // * corresponding to the weights from each unit. // * * 3: bias. // * A 1-D tensor of shape [num_units]. // * * 4: hidden state (in). // * A 2-D tensor of shape [batch_size, num_units]. // * * 5: fused_activation_function. // * An optional {@link FuseCode} value indicating the // * activation function. If “NONE” is specified then it results in a // * linear activation. // * // * Outputs: // * * 0: hidden state (out). // * A 2-D tensor of shape [batch_size, num_units]. // * // * * 1: output. // * A 2-D tensor of shape [batch_size, num_units]. This is effectively // * the same as the current state value. // * // * Available since NNAPI feature level 1. RNN = 24, // * // * Computes the softmax activation on the input tensor element-wise, per // * batch, by normalizing the input vector so the maximum coefficient is // * zero. // * // * The output is calculated using this formula: // * // * output[batch, i] = // * exp((input[batch, i] - max(input[batch, :])) * beta) / // * sum_{k}{exp((input[batch, k] - max(input[batch, :])) * beta)} // * // * For input tensor with rank other than 2, the activation will be applied // * independently on each 1-D slice along the specified dimension.
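The softmax formula above corresponds directly to the following NumPy reference sketch (illustrative only, not the NNAPI C API; note the max subtraction that normalizes the maximum coefficient to zero):

    import numpy as np

    def softmax(x, beta=1.0, axis=-1):
        # Shift by the slice maximum so the largest exponent is zero,
        # then scale by beta, exactly as in the formula above.
        m = x.max(axis=axis, keepdims=True)
        e = np.exp((x - m) * beta)
        return e / e.sum(axis=axis, keepdims=True)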
// * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4. // * Tensors with rank other than 2 or 4 are only supported since NNAPI feature level 3. // * // * Inputs: // * * 0: A 2-D or 4-D tensor, specifying the input. // * Since NNAPI feature level 3, this tensor may be zero-sized. // * * 1: A scalar, specifying the positive scaling factor for the exponent, // * beta. If input0 is of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} or // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, the scalar // * must be of {@link ANEURALNETWORKS_FLOAT32}. // * If input0 is of {@link ANEURALNETWORKS_TENSOR_FLOAT16}, then the // * scalar must be of {@link ANEURALNETWORKS_FLOAT16}. // * * 2: An optional {@link ANEURALNETWORKS_INT32} scalar, default to -1, // * specifying the dimension the activation would be performed on. // * Negative index is used to specify axis from the end (e.g. -1 for // * the last axis). Must be in the range [-n, n). // * Available since NNAPI feature level 3. // * // * Outputs: // * * 0: The output tensor of same shape as input0. // * For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}, // * the scale must be 1.f / 256 and the zeroPoint must be 0. // * For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, // * the scale must be 1.f / 256 and the zeroPoint must be -128. // * // * Available since NNAPI feature level 1. SOFTMAX = 25, // * // * Rearranges blocks of spatial data into depth. // * // * More specifically, this op outputs a copy of the input tensor where // * values from the height and width dimensions are moved to the depth // * dimension. The value block_size indicates the input block size and how // * the data is moved. // * // * Non-overlapping blocks of spatial data of size block_size x block_size // * are rearranged into chunks of depth of size block_size * block_size. // * // * The depth of the output tensor is input_depth * block_size * block_size. // * The input tensor's height and width must be divisible by block_size. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout. // * With the default data layout NHWC, the data is stored in the order of: // * [batch, height, width, channels]. Alternatively, the data layout could // * be NCHW, the data storage order of: [batch, channels, height, width]. // * NCHW is supported since NNAPI feature level 3. // * // * Inputs: // * * 0: A 4-D tensor, of shape [batches, height, width, depth_in], // * specifying the input. // * * 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the block_size. // * block_size must be >=1 and block_size must be a divisor of both the // * input height and width. // * * 2: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false. // * Set to true to specify NCHW data layout for input0 and output0. // * Available since NNAPI feature level 3.
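For the default NHWC layout, the rearrangement described above can be sketched in NumPy as follows (an illustrative reference, not the NNAPI C API; space_to_depth_nhwc is a hypothetical helper):

    import numpy as np

    def space_to_depth_nhwc(x, block_size):
        n, h, w, c = x.shape
        assert h % block_size == 0 and w % block_size == 0
        x = x.reshape(n, h // block_size, block_size, w // block_size, block_size, c)
        x = x.transpose(0, 1, 3, 2, 4, 5)
        # Output depth is input_depth * block_size * block_size, as documented.
        return x.reshape(n, h // block_size, w // block_size, c * block_size * block_size)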
// * // * Outputs: // * * 0: The output 4-D tensor, of shape [batches, height/block_size, // * width/block_size, depth_in*block_size*block_size]. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 1. SPACE_TO_DEPTH = 26, // * // * SVDF op is a kind of stateful layer derived from the notion that a // * densely connected layer that's processing a sequence of input frames can // * be approximated by using a singular value decomposition of each of its // * nodes. The implementation is based on: // * // * https://research.google.com/pubs/archive/43813.pdf // * // * P. Nakkiran, R. Alvarez, R. Prabhavalkar, C. Parada. // * “Compressing Deep Neural Networks using a Rank-Constrained Topology”. // * INTERSPEECH, 2015. // * // * It processes the incoming input using a 2-stage filtering mechanism: // * * stage 1 performs filtering on the "features" dimension, whose outputs // * get pushed into a memory of fixed-size memory_size. // * * stage 2 performs filtering on the "time" dimension of the memory_size // * memoized outputs of stage 1. // * // * Specifically, for rank 1, this layer implements the operation: // * // * memory = push(conv1d(inputs, weights_feature, feature_dim, // * "ANEURALNETWORKS_PADDING_VALID")); // * outputs = activation(memory * weights_time + bias); // * // * Where: // * * “weights_feature” is a weights matrix that processes the inputs (by // * convolving the input with every “feature filter”), and whose outputs // * get pushed, stacked in order, into the fixed-size “memory” (the oldest // * entry gets dropped); // * * “weights_time” is a weights matrix that processes the “memory” (by a // * batched matrix multiplication on the num_units); // * * “bias” is an optional bias vector (added to each output vector in the // * batch); and // * * “activation” is the function passed as the “fused_activation_function” // * argument (if not “NONE”). // * // * Each rank adds a dimension to the weights matrices by means of stacking // * the filters. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * All input tensors must be the same type. // * // * Inputs: // * * 0: input. // * A 2-D tensor of shape [batch_size, input_size], where “batch_size” // * corresponds to the batching dimension, and “input_size” is the size // * of the input. // * * 1: weights_feature. // * A 2-D tensor of shape [num_units, input_size], where “num_units” // * corresponds to the number of units. // * * 2: weights_time. // * A 2-D tensor of shape [num_units, memory_size], where “memory_size” // * corresponds to the fixed-size of the memory. // * * 3: bias. // * An optional 1-D tensor of shape [num_units]. // * * 4: state (in). // * A 2-D tensor of shape [batch_size, (memory_size - 1) * num_units * rank]. // * * 5: rank. // * The rank of the SVD approximation. // * * 6: fused_activation_function. // * An optional {@link FuseCode} value indicating the // * activation function. If “NONE” is specified then it results in a // * linear activation. // * // * Outputs: // * * 0: state (out). // * A 2-D tensor of the same {@link OperandCode} as the inputs, with shape // * [batch_size, (memory_size - 1) * num_units * rank]. // * * 1: output. 
// * A 2-D tensor of the same {@link OperandCode} as the inputs, with shape // * [batch_size, num_units]. // * // * Available since NNAPI feature level 1. SVDF = 27, // * // * Computes hyperbolic tangent of input tensor element-wise. // * // * The output is calculated using this formula: // * // * output = tanh(input) // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4. // * // * Inputs: // * * 0: A tensor, specifying the input. // * Since NNAPI feature level 3, this tensor may be zero-sized. // * // * Outputs: // * * 0: The output tensor of same shape as input0. // * For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}, // * the scale must be 1.f / 128 and the zeroPoint must be 128. // * For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, // * the scale must be 1.f / 128 and the zeroPoint must be 0. // * // * Available since NNAPI feature level 1. TANH = 28, // * // * BatchToSpace for N-dimensional tensors. // * // * This operation reshapes the batch dimension (dimension 0) into M + 1 // * dimensions of shape block_shape + [batch], interleaves these blocks back // * into the grid defined by the spatial dimensions [1, ..., M], to obtain a // * result with the same rank as the input. // * // * This is the reverse of SpaceToBatch. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout. // * With the default data layout NHWC, the data is stored in the order of: // * [batch, height, width, channels]. Alternatively, the data layout could // * be NCHW, the data storage order of: [batch, channels, height, width]. // * NCHW is supported since NNAPI feature level 3. // * // * Inputs: // * * 0: An n-D tensor, specifying the tensor to be reshaped. // * * 1: A 1-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the block // * sizes for each spatial dimension of the input tensor. All values // * must be >= 1. // * * 2: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false. // * Set to true to specify NCHW data layout for input0 and output0. // * Available since NNAPI feature level 3. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 2. BATCH_TO_SPACE_ND = 29, // * // * Element-wise division of two tensors. // * // * Takes two input tensors of identical {@link OperandCode} and compatible // * dimensions. The output is the result of dividing the first input tensor // * by the second, optionally modified by an activation function. // * // * For inputs of {@link ANEURALNETWORKS_TENSOR_INT32}, performs // * "floor division" ("//" in Python). For example, // * 5 // 2 = 2 // * -5 // 2 = -3 // * // * Two dimensions are compatible when: // * 1. they are equal, or // * 2.
one of them is 1 // * // * The size of the output is the maximum size along each dimension of the // * input operands. It starts with the trailing dimensions, and works its way // * forward. // * // * Example: // * input1.dimension = {4, 1, 2} // * input2.dimension = {5, 4, 3, 1} // * output.dimension = {5, 4, 3, 2} // * // * Since NNAPI feature level 3, generic zero-sized input tensor is supported. Zero // * dimension is only compatible with 0 or 1. The size of the output // * dimension is zero if either of corresponding input dimension is zero. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: An n-D tensor, specifying the first input. // * * 1: A tensor of the same {@link OperandCode}, and compatible dimensions // * as input0. // * * 2: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the // * {@link FuseCode} values. Specifies the activation to // * invoke on the result. // * For a {@link ANEURALNETWORKS_TENSOR_INT32} tensor, // * the {@link FuseCode} must be "NONE". // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. // * // * Available since NNAPI feature level 2. DIV = 30, // * // * Computes the mean of elements across dimensions of a tensor. // * // * Reduces the input tensor along the given dimensions to reduce. Unless // * keep_dims is true, the rank of the tensor is reduced by 1 for each entry // * in axis. If keep_dims is true, the reduced dimensions are retained with // * length 1. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: A tensor, specifying the input. // * * 1: A 1-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The dimensions // * to reduce. Must be in the range // * [-rank(input_tensor), rank(input_tensor)). // * // * NOTE: When the operation was introduced, the documentation // * incorrectly stated that if dimensions were empty, the operation // * would reduce across all dimensions. This behavior was never // * implemented. // * // * * 2: An {@link ANEURALNETWORKS_INT32} scalar, keep_dims. If positive, // * retains reduced dimensions with length 1. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * If all dimensions are reduced and keep_dims is false, the output // * shape is [1]. // * // * Available since NNAPI feature level 2. MEAN = 31, // * // * Pads a tensor. // * // * This operation pads a tensor according to the specified paddings. 
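The padding semantics spelled out below (padding[i, 0] elements before and padding[i, 1] elements after dimension i) match NumPy's np.pad, which makes for a quick illustration (the values shown are arbitrary):

    import numpy as np

    x = np.arange(6).reshape(2, 3)
    paddings = [[1, 2], [0, 1]]   # shape {rank(input0), 2}, as documented below
    y = np.pad(x, paddings)       # pads with zeros, the "logical zero" pad value
    # output0.dimension[i] = padding[i, 0] + input0.dimension[i] + padding[i, 1]
    assert y.shape == (1 + 2 + 2, 0 + 3 + 1)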
// * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * (full support since NNAPI feature level 3, see the output section) // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: An n-D tensor, specifying the tensor to be padded. // * * 1: A 2-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the paddings // * for each spatial dimension of the input tensor. The shape of the // * tensor must be {rank(input0), 2}. // * padding[i, 0] specifies the number of elements to be padded in the // * front of dimension i. // * padding[i, 1] specifies the number of elements to be padded after the // * end of dimension i. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. The // * output tensor has the same rank as input0, and each // * dimension of the output tensor has the same size as the // * corresponding dimension of the input tensor plus the size // * of the padding: // * output0.dimension[i] = // * padding[i, 0] + input0.dimension[i] + padding[i, 1] // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * NOTE: Before NNAPI feature level 3, the pad value for // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} is undefined. // * Since NNAPI feature level 3, the pad value is always the logical zero. // * // * Available since NNAPI feature level 2. PAD = 32, // * // * SpaceToBatch for N-Dimensional tensors. // * // * This operation divides "spatial" dimensions [1, ..., M] of the input into // * a grid of blocks of shape block_shape, and interleaves these blocks with // * the "batch" dimension (0) such that in the output, the spatial dimensions // * [1, ..., M] correspond to the position within the grid, and the batch // * dimension combines both the position within a spatial block and the // * original batch position. Prior to division into blocks, the spatial // * dimensions of the input are optionally zero padded according to paddings. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * (full support since NNAPI feature level 3, see the output section) // * // * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout. // * With the default data layout NHWC, the data is stored in the order of: // * [batch, height, width, channels]. Alternatively, the data layout could // * be NCHW, the data storage order of: [batch, channels, height, width]. // * NCHW is supported since NNAPI feature level 3. // * // * Inputs: // * * 0: An n-D tensor, specifying the input. // * * 1: A 1-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the block // * sizes for each spatial dimension of the input tensor. All values // * must be >= 1. // * * 2: A 2-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the paddings // * for each spatial dimension of the input tensor. All values must be // * >= 0. The shape of the tensor must be {M, 2}, where M is the number // * of spatial dimensions. 
// * padding[i, 0] specifies the number of elements to be padded in the // * front of dimension i. // * padding[i, 1] specifies the number of elements to be padded after the // * end of dimension i. // * * 3: An optional {@link ANEURALNETWORKS_BOOL} scalar, default to false. // * Set to true to specify NCHW data layout for input0 and output0. // * Available since NNAPI feature level 3. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * NOTE: Before NNAPI feature level 3, the pad value for // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} is undefined. // * Since NNAPI feature level 3, the pad value is always the logical zero. // * // * Available since NNAPI feature level 2. SPACE_TO_BATCH_ND = 33, // * // * Removes dimensions of size 1 from the shape of a tensor. // * // * Given a tensor input, this operation returns a tensor of the same // * {@link OperandCode} with all dimensions of size 1 removed. If you don't // * want to remove all size 1 dimensions, you can remove specific size 1 // * dimensions by specifying the axes (input1). // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: An n-D tensor, the tensor to be squeezed. // * * 1: An optional 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The // * dimensions to squeeze. If specified, only squeezes the dimensions // * listed. Otherwise, squeezes all dimensions. The dimension index // * starts at 0. An error must be reported if squeezing a dimension that // * is not 1. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. Contains the // * same data as input, but has one or more dimensions of size 1 // * removed. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * If all input dimensions are equal to 1 and are to be squeezed, the // * output shape is [1]. // * // * Available since NNAPI feature level 2. SQUEEZE = 34, // * // * Extracts a strided slice of a tensor. // * // * Roughly speaking, this op extracts a slice of size (end - begin) / stride // * from the given input tensor. Starting at the location specified by begin, // * the slice continues by adding stride to the index until all dimensions // * are not less than end. Note that a stride can be negative, which causes a // * reverse slice. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: An n-D tensor, specifying the tensor to be sliced. // * * 1: begin, a 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The // * starts of the dimensions of the input tensor to be sliced. The // * length must be of rank(input0).
// * * 2: end, a 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The // * ends of the dimensions of the input tensor to be sliced. The length // * must be of rank(input0). // * * 3: strides, a 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The // * strides of the dimensions of the input tensor to be sliced. The // * length must be of rank(input0). The entries must be non-zero. // * * 4: begin_mask, an {@link ANEURALNETWORKS_INT32} scalar. If the ith bit // * of begin_mask is set, begin[i] is ignored and the fullest possible // * range in that dimension is used instead. // * * 5: end_mask, an {@link ANEURALNETWORKS_INT32} scalar. If the ith bit of // * end_mask is set, end[i] is ignored and the fullest possible range in // * that dimension is used instead. // * * 6: shrink_axis_mask, an {@link ANEURALNETWORKS_INT32} scalar. If the // * ith bit of shrink_axis_mask is set, the ith dimension specification // * shrinks the dimensionality by 1, taking on the value at index // * begin[i]. In this case, the ith specification must define a // * slice of size 1, e.g. begin[i] = x, end[i] = x + 1. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0 and rank (n - k), // * where k is the number of bits set in shrink_axis_mask. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * If shrink_axis_mask is true for all input dimensions, the output // * shape is [1]. // * // * Available since NNAPI feature level 2. STRIDED_SLICE = 35, // * // * Element-wise subtraction of two tensors. // * // * Takes two input tensors of identical {@link OperandCode} and compatible // * dimensions. The output is the result of subtracting the second input // * tensor from the first one, optionally modified by an activation function. // * // * Two dimensions are compatible when: // * 1. they are equal, or // * 2. one of them is 1 // * // * The size of the output is the maximum size along each dimension of the // * input operands. It starts with the trailing dimensions, and works its way // * forward. // * // * Example: // * input1.dimension = {4, 1, 2} // * input2.dimension = {5, 4, 3, 1} // * output.dimension = {5, 4, 3, 2} // * // * Since NNAPI feature level 3, generic zero-sized input tensor is supported. Zero // * dimension is only compatible with 0 or 1. The size of the output // * dimension is zero if either of corresponding input dimension is zero. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * * {@link ANEURALNETWORKS_TENSOR_INT32} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: An n-D tensor, specifying the first input. // * * 1: A tensor of the same {@link OperandCode}, and compatible dimensions // * as input0. // * * 2: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the // * {@link FuseCode} values. Specifies the activation to // * invoke on the result. // * For a {@link ANEURALNETWORKS_TENSOR_INT32} tensor, // * the {@link FuseCode} must be "NONE". // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. 
// * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint can be different from inputs' scale and zeroPoint. // * // * Available since NNAPI feature level 2. SUB = 36, // * // * Transposes the input tensor, permuting the dimensions according to the // * perm tensor. // * // * The returned tensor's dimension i corresponds to the input dimension // * perm[i]. If perm is not given, it is set to (n-1...0), where n is the // * rank of the input tensor. Hence by default, this operation performs a // * regular matrix transpose on 2-D input Tensors. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} (since NNAPI feature level 3) // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: An n-D tensor, specifying the tensor to be transposed. // * Since NNAPI feature level 3, this tensor may be zero-sized. // * * 1: An optional 1-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, // * the permutation of the dimensions of the input tensor. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 2. TRANSPOSE = 37, // * // * Computes the absolute value of a tensor, element-wise. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1. // * // * Inputs: // * * 0: A tensor. // * // * Outputs: // * * 0: The output tensor of same shape as input0. // * // * Available since NNAPI feature level 3. ABS = 38, // * // * Returns the index of the largest element along an axis. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1 // * // * Inputs: // * * 0: An n-D tensor specifying the input. Must be non-empty. // * * 1: An {@link ANEURALNETWORKS_INT32} scalar specifying the axis to // * reduce across. Negative index is used to specify axis from the // * end (e.g. -1 for the last axis). Must be in the range [-n, n). // * // * Outputs: // * * 0: An (n - 1)-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor. // * If input is 1-dimensional, the output shape is [1]. // * // * Available since NNAPI feature level 3. // // ARGMAX is spelled without an underscore to avoid a name conflict with the // ARG_MAX macro defined in libc/kernel/uapi/linux/limits.h. ARGMAX = 39, // * // * Returns the index of the smallest element along an axis.
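Both ARGMAX above and ARGMIN below behave like NumPy's argmax/argmin: the chosen axis is reduced away, negative axes count from the end, and the result is an (n - 1)-D integer tensor. A small illustration:

    import numpy as np

    x = np.array([[1.0, 5.0, 2.0],
                  [7.0, 0.0, 7.0]])
    print(np.argmax(x, axis=-1))  # [1 0]; NumPy returns the first index on ties
    print(np.argmin(x, axis=-1))  # [0 1]; output shape is (2,), i.e. (n - 1)-D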
// * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1 // * // * Inputs: // * * 0: An n-D tensor specifying the input. Must be non-empty. // * * 1: An {@link ANEURALNETWORKS_INT32} scalar specifying the axis to // * reduce across. Negative index is used to specify axis from the // * end (e.g. -1 for the last axis). Must be in the range [-n, n). // * // * Outputs: // * * 0: An (n - 1)-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor. // * If input is 1-dimensional, the output shape is [1]. // * // * Available since NNAPI feature level 3. ARGMIN = 40, // See ARGMAX for naming discussion. // * // * Transform axis-aligned bounding box proposals using bounding box deltas. // * // * Given the positions of bounding box proposals and the corresponding // * bounding box deltas for each class, return the refined bounding box // * regions. The resulting bounding boxes are clipped against the edges of // * the image. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM} // * // * Inputs: // * * 0: A 2-D Tensor of shape [num_rois, 4], specifying the locations of the // * bounding box proposals, each line with format [x1, y1, x2, y2]. // * For tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, // * the zeroPoint must be 0 and the scale must be 0.125. Zero num_rois // * is supported for this tensor. // * * 1: A 2-D Tensor of shape [num_rois, num_classes * 4], specifying the // * bounding box delta for each region of interest and each class. The // * bounding box deltas are organized in the following order // * [dx, dy, dw, dh], where dx and dy are the relative correction factors // * for the center position of the bounding box with respect to the width // * and height, and dw and dh are the log-scale relative correction factors // * for the width and height. For input0 of type // * {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, this tensor should be // * of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} or // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}. Zero num_rois is // * supported for this tensor. // * * 2: A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape // * [num_rois], specifying the batch index of each box. Boxes with // * the same batch index are grouped together. Zero num_rois is // * supported for this tensor. // * * 3: A 2-D Tensor of shape [batches, 2], specifying the information of // * each image in the batch, each line with format // * [image_height, image_width]. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0, with shape // * [num_rois, num_classes * 4], specifying the coordinates of each // * output bounding box for each class, with format [x1, y1, x2, y2]. // * For type of {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, the // * scale must be 0.125 and the zero point must be 0. // * // * Available since NNAPI feature level 3. AXIS_ALIGNED_BBOX_TRANSFORM = 41, // * // * A recurrent neural network layer that applies an LSTM cell to a // * sequence of inputs in forward and backward directions. // * // * The op supports cross-linking via an auxiliary input.
A regular cell feeds one input into the two RNN cells in the following way:
// *
// *          INPUT  (INPUT_REVERSED)
// *            |         |
// *      ---------------------
// *      | FW_LSTM   BW_LSTM |
// *      ---------------------
// *            |         |
// *         FW_OUT    BW_OUT
// *
// * An op with cross-linking takes two inputs and feeds them into the RNN // * cells in the following way:
// *
// *        AUX_INPUT   (AUX_INPUT_REVERSED)
// *            |             |
// *      INPUT | (INPUT_R'D.)|
// *        |   |       |     |
// *     -----------------------
// *     |  \  /        \  /  |
// *     | FW_LSTM    BW_LSTM |
// *     -----------------------
// *           |          |
// *        FW_OUT     BW_OUT
// *
// * The cross-linking mode is enabled iff auxiliary input and auxiliary // * weights are present. While stacking this op on top of itself, this // * allows connecting both forward and backward outputs from the previous cell // * to the next cell's input. // * // * Since NNAPI feature level 4, parallel linking mode is supported. The mode is // * enabled if auxiliary input is present but auxiliary weights are omitted. // * In this case, the cell feeds inputs into the RNN in the following way:
// *
// *          INPUT  (AUX_INPUT_REVERSED)
// *            |         |
// *      ---------------------
// *      | FW_LSTM   BW_LSTM |
// *      ---------------------
// *            |         |
// *         FW_OUT    BW_OUT
// *
// * While stacking this op on top of itself, this allows connecting both // * forward and backward outputs from the previous cell to the next cell's // * corresponding inputs. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Supported tensor rank: 3, either time-major or batch-major. // * // * All input and output tensors must be of the same type. // * // * Inputs: // * * 0: The input. // * A 3-D tensor of shape: // * If time-major: [max_time, batch_size, input_size] // * If batch-major: [batch_size, max_time, input_size] // * where "max_time" is the number of timesteps (sequence length), // * "batch_size" corresponds to the batching dimension, and // * "input_size" is the size of the input. // * * 1: The forward input-to-input weights. Optional. // * A 2-D tensor of shape [fw_num_units, input_size], where “fw_num_units” // * corresponds to the number of forward cell units. // * * 2: The forward input-to-forget weights. // * A 2-D tensor of shape [fw_num_units, input_size]. // * * 3: The forward input-to-cell weights. // * A 2-D tensor of shape [fw_num_units, input_size]. // * * 4: The forward input-to-output weights. // * A 2-D tensor of shape [fw_num_units, input_size]. // * * 5: The forward recurrent-to-input weights. Optional. // * A 2-D tensor of shape [fw_num_units, fw_output_size], where “fw_output_size” // * corresponds to either the number of cell units (i.e., fw_num_units), // * or the second dimension of the “fw_projection_weights”, if defined. // * * 6: The forward recurrent-to-forget weights. // * A 2-D tensor of shape [fw_num_units, fw_output_size]. // * * 7: The forward recurrent-to-cell weights. // * A 2-D tensor of shape [fw_num_units, fw_output_size]. // * * 8: The forward recurrent-to-output weights. // * A 2-D tensor of shape [fw_num_units, fw_output_size]. // * * 9: The forward cell-to-input weights. Optional. // * A 1-D tensor of shape [fw_num_units]. // * * 10: The forward cell-to-forget weights. Optional. // * A 1-D tensor of shape [fw_num_units]. // * * 11: The forward cell-to-output weights. Optional. // * A 1-D tensor of shape [fw_num_units]. // * * 12: The forward input gate bias. Optional. // * A 1-D tensor of shape [fw_num_units]. // * * 13: The forward forget gate bias.
// * A 1-D tensor of shape [fw_num_units]. // * * 14: The forward cell gate bias. // * A 1-D tensor of shape [fw_num_units]. // * * 15: The forward output gate bias. // * A 1-D tensor of shape [fw_num_units]. // * * 16: The forward projection weights. Optional. // * A 2-D tensor of shape [fw_output_size, fw_num_units]. // * * 17: The forward projection bias. Optional. // * A 1-D tensor of shape [fw_output_size]. // * * 18: The backward input-to-input weights. Optional. // * A 2-D tensor of shape [bw_num_units, input_size], where “bw_num_units” // * corresponds to the number of backward cell units. // * * 19: The backward input-to-forget weights. // * A 2-D tensor of shape [bw_num_units, input_size]. // * * 20: The backward input-to-cell weights. // * A 2-D tensor of shape [bw_num_units, input_size]. // * * 21: The backward input-to-output weights. // * A 2-D tensor of shape [bw_num_units, input_size]. // * * 22: The backward recurrent-to-input weights. Optional. // * A 2-D tensor of shape [bw_num_units, bw_output_size], where “bw_output_size” // * corresponds to either the number of cell units (i.e., “bw_num_units”), // * or the second dimension of the “bw_projection_weights”, if defined. // * * 23: The backward recurrent-to-forget weights. // * A 2-D tensor of shape [bw_num_units, bw_output_size]. // * * 24: The backward recurrent-to-cell weights. // * A 2-D tensor of shape [bw_num_units, bw_output_size]. // * * 25: The backward recurrent-to-output weights. // * A 2-D tensor of shape [bw_num_units, bw_output_size]. // * * 26: The backward cell-to-input weights. Optional. // * A 1-D tensor of shape [bw_num_units]. // * * 27: The backward cell-to-forget weights. Optional. // * A 1-D tensor of shape [bw_num_units]. // * * 28: The backward cell-to-output weights. Optional. // * A 1-D tensor of shape [bw_num_units]. // * * 29: The backward input gate bias. Optional. // * A 1-D tensor of shape [bw_num_units]. // * * 30: The backward forget gate bias. // * A 1-D tensor of shape [bw_num_units]. // * * 31: The backward cell gate bias. // * A 1-D tensor of shape [bw_num_units]. // * * 32: The backward output gate bias. // * A 1-D tensor of shape [bw_num_units]. // * * 33: The backward projection weights. Optional. // * A 2-D tensor of shape [bw_output_size, bw_num_units]. // * * 34: The backward projection bias. Optional. // * A 1-D tensor of shape [bw_output_size]. // * * 35: The forward input activation state. // * A 2-D tensor of shape [batch_size, bw_output_size]. // * * 36: The forward input cell state. // * A 2-D tensor of shape [batch_size, bw_num_units]. // * * 37: The backward input activation state. // * A 2-D tensor of shape [batch_size, bw_output_size]. // * * 38: The backward input cell state. // * A 2-D tensor of shape [batch_size, bw_num_units]. // * * 39: The auxiliary input. Optional. // * A 3-D tensor of shape [max_time, batch_size, aux_input_size], // * where “batch_size” corresponds to the batching dimension, and // * “aux_input_size” is the size of the auxiliary input. Optional. See // * the docs above for the usage modes explanation. // * * 40: The forward auxiliary input-to-input weights. // * Optional. See the docs above for the usage modes explanation. // * A 2-D tensor of shape [fw_num_units, aux_input_size]. // * * 41: The forward auxiliary input-to-forget weights. // * Optional. See the docs above for the usage modes explanation. // * A 2-D tensor of shape [fw_num_units, aux_input_size]. // * * 42: The forward auxiliary input-to-cell weights. // * Optional. 
See the docs above for the usage modes explanation. // * A 2-D tensor of shape [fw_num_units, aux_input_size]. // * * 43: The forward auxiliary input-to-output weights. // * Optional. See the docs above for the usage modes explanation. // * A 2-D tensor of shape [fw_num_units, aux_input_size]. // * * 44: The backward auxiliary input-to-input weights. // * Optional. See the docs above for the usage modes explanation. // * A 2-D tensor of shape [bw_num_units, aux_input_size]. // * * 45: The backward auxiliary input-to-forget weights. // * Optional. See the docs above for the usage modes explanation. // * A 2-D tensor of shape [bw_num_units, aux_input_size]. // * * 46: The backward auxiliary input-to-cell weights. // * Optional. See the docs above for the usage modes explanation. // * A 2-D tensor of shape [bw_num_units, aux_input_size]. // * * 47: The backward auxiliary input-to-output weights. // * Optional. See the docs above for the usage modes explanation. // * A 2-D tensor of shape [bw_num_units, aux_input_size]. // * * 48: The activation function. // * A value indicating the activation function: // * * 0: None; // * * 1: Relu; // * * 3: Relu6; // * * 4: Tanh; // * * 6: Sigmoid. // * * 49: The clipping threshold for the cell state, such // * that values are bound within [-cell_clip, cell_clip]. If set to 0.0 // * then clipping is disabled. // * If all the input tensors have type {@link ANEURALNETWORKS_TENSOR_FLOAT32}, // * this scalar must be of the type {@link ANEURALNETWORKS_FLOAT32}, // * otherwise if all the input tensors have the type // * {@link ANEURALNETWORKS_TENSOR_FLOAT16}, this scalar must be // * of type {@link ANEURALNETWORKS_FLOAT16}. // * * 50: The clipping threshold for the output from the // * projection layer, such that values are bound within // * [-proj_clip, proj_clip]. If set to 0.0 then clipping is disabled. // * If all the input tensors have type {@link ANEURALNETWORKS_TENSOR_FLOAT32}, // * this scalar must be of the type {@link ANEURALNETWORKS_FLOAT32}, // * otherwise if all the input tensors have the type // * {@link ANEURALNETWORKS_TENSOR_FLOAT16}, this scalar must be // * of type {@link ANEURALNETWORKS_FLOAT16}. // * * 51: merge_outputs // * An {@link ANEURALNETWORKS_BOOL} scalar specifying if the outputs // * from forward and backward cells should be merged. // * * 52: time_major // * An {@link ANEURALNETWORKS_BOOL} scalar specifying the shape format // * of input and output tensors. // * * 53: The forward input layer normalization weights. Optional. // * A 1-D tensor of shape [fw_num_units]. Used to rescale normalized inputs // * to activation at input gate. // * * 54: The forward forget layer normalization weights. Optional. // * A 1-D tensor of shape [fw_num_units]. Used to rescale normalized inputs // * to activation at forget gate. // * * 55: The forward cell layer normalization weights. Optional. // * A 1-D tensor of shape [fw_num_units]. Used to rescale normalized inputs // * to activation at cell gate. // * * 56: The forward output layer normalization weights. Optional. // * A 1-D tensor of shape [fw_num_units]. Used to rescale normalized inputs // * to activation at output gate. // * * 57: The backward input layer normalization weights. Optional. // * A 1-D tensor of shape [bw_num_units]. Used to rescale normalized inputs // * to activation at input gate. // * * 58: The backward forget layer normalization weights. Optional. // * A 1-D tensor of shape [bw_num_units].
Used to rescale normalized inputs // * to activation at forget gate. // * * 59: The backward cell layer normalization weights. Optional. // * A 1-D tensor of shape [bw_num_units]. Used to rescale normalized inputs // * to activation at cell gate. // * * 60: The backward output layer normalization weights. Optional. // * A 1-D tensor of shape [bw_num_units]. Used to rescale normalized inputs // * to activation at output gate. // * // * Outputs: // * * 0: The forward output. // * A 3-D tensor of shape: // * If time-major and not merge_outputs: // * [max_time, batch_size, fw_output_size] // * If time-major and merge_outputs: // * [max_time, batch_size, fw_output_size + bw_output_size] // * If batch-major and not merge_outputs: // * [batch_size, max_time, fw_output_size] // * If batch-major and merge_outputs: // * [batch_size, max_time, fw_output_size + bw_output_size] // * * 1: The backward output. Unused if merge_outputs is true. // * A 3-D tensor of shape: // * If time-major: [max_time, batch_size, bw_output_size] // * If batch-major: [batch_size, max_time, bw_output_size] // * * 2: The forward activation state output. // * A 2-D tensor of shape [batch_size, fw_output_size] containing an // * activation state from the last time step in the sequence. This // * output is optional and can be omitted. If this output is present // * then outputs 3-5 must be present as well. // * Available since NNAPI feature level 4. // * * 3: The forward cell state output. // * A tensor of shape [batch_size, fw_cell_size] containing a cell state // * from the last time step in the sequence. This output is optional // * and can be omitted. If this output is present // * then outputs 2, 4, 5 must be present as well. // * Available since NNAPI feature level 4. // * * 4: The backward activation state output. // * A 2-D tensor of shape [batch_size, bw_output_size] containing an // * activation state from the last time step in the sequence. This // * output is optional and can be omitted. If this output is present // * then outputs 2, 3, 5 must be present as well. // * Available since NNAPI feature level 4. // * * 5: The backward cell state output. // * A tensor of shape [batch_size, bw_cell_size] containing a cell state // * from the last time step in the sequence. This output is optional // * and can be omitted. If this output is present // * then outputs 2-4 must be present as well. // * Available since NNAPI feature level 4. // * // * Available since NNAPI feature level 3. // * // * Important: As of NNAPI feature level 3, there is no way to get the output state tensors out // * and NNAPI does not maintain internal states. This operator does not support the usage pattern // * in which multiple cells are chained and state tensors are propagated. BIDIRECTIONAL_SEQUENCE_LSTM = 42, // * // * A recurrent neural network layer that applies a basic RNN cell to a // * sequence of inputs in forward and backward directions. 
// * // * This op unrolls the input along the sequence dimension, and implements // * the following operation for each element in the sequence s = // * 1...sequence_length: // * fw_outputs[s] = fw_state = activation(inputs[s] * fw_input_weights’ + // * fw_state * fw_recurrent_weights’ + fw_bias) // * // * And for each element in the sequence t = sequence_length : 1 // * bw_outputs[t] = bw_state = activation(inputs[t] * bw_input_weights’ + // * bw_state * bw_recurrent_weights’ + bw_bias) // * // * Where: // * * “{fw,bw}_input_weights” is a weight matrix that multiplies the inputs; // * * “{fw,bw}_recurrent_weights” is a weight matrix that multiplies the // * current “state” which itself is the output from the previous time step // * computation; // * * “{fw,bw}_bias” is a bias vector (added to each output vector in the // * batch); // * * “activation” is the function passed as the “fused_activation_function” // * argument (if not “NONE”). // * // * The op supports cross-linking via an auxiliary input. A regular cell feeds // * one input into the two RNN cells in the following way:
// *
// *          INPUT  (INPUT_REVERSED)
// *            |         |
// *      ---------------------
// *      |  FW_RNN   BW_RNN  |
// *      ---------------------
// *            |         |
// *         FW_OUT    BW_OUT
// *
// * An op with cross-linking takes two inputs and feeds them into the RNN // * cells in the following way:
// *
// *        AUX_INPUT   (AUX_INPUT_REVERSED)
// *            |             |
// *      INPUT | (INPUT_R'D.)|
// *        |   |       |     |
// *     -----------------------
// *     |  \  /        \  /  |
// *     |  FW_RNN    BW_RNN  |
// *     -----------------------
// *           |          |
// *        FW_OUT     BW_OUT
// *
// * The cross-linking mode is enabled iff auxiliary input and auxiliary // * weights are present. While stacking this op on top of itself, this // * allows connecting both forward and backward outputs from the previous cell // * to the next cell's input. // * // * Since NNAPI feature level 4, parallel linking mode is supported. The mode is // * enabled if auxiliary input is present but auxiliary weights are omitted. // * In this case, the cell feeds inputs into the RNN in the following way:
// *
// *          INPUT  (AUX_INPUT_REVERSED)
// *            |         |
// *      ---------------------
// *      |  FW_RNN   BW_RNN  |
// *      ---------------------
// *            |         |
// *         FW_OUT    BW_OUT
// *
// * While stacking this op on top of itself, this allows connecting both // * forward and backward outputs from the previous cell to the next cell's // * corresponding inputs. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * The input tensors must all be the same type. // * // * Inputs: // * * 0: input. // * A 3-D tensor. The shape is defined by the input 13 (timeMajor). If // * it is set to true, then the input has a shape [maxTime, batchSize, // * inputSize], otherwise the input has a shape [batchSize, maxTime, // * inputSize]. // * * 1: fwWeights. // * A 2-D tensor of shape [fwNumUnits, inputSize]. // * * 2: fwRecurrentWeights. // * A 2-D tensor of shape [fwNumUnits, fwNumUnits]. // * * 3: fwBias. // * A 1-D tensor of shape [fwNumUnits]. // * * 4: fwHiddenState. // * A 2-D tensor of shape [batchSize, fwNumUnits]. Specifies a hidden // * state input for the first time step of the computation. // * * 5: bwWeights. // * A 2-D tensor of shape [bwNumUnits, inputSize]. // * * 6: bwRecurrentWeights. // * A 2-D tensor of shape [bwNumUnits, bwNumUnits]. // * * 7: bwBias. // * A 1-D tensor of shape [bwNumUnits]. // * * 8: bwHiddenState // * A 2-D tensor of shape [batchSize, bwNumUnits].
// * // * Inputs: // * * 0: input. // * A 3-D tensor. The shape is defined by the input 13 (timeMajor). If // * it is set to true, then the input has a shape [maxTime, batchSize, // * inputSize], otherwise the input has a shape [batchSize, maxTime, // * inputSize]. // * * 1: fwWeights. // * A 2-D tensor of shape [fwNumUnits, inputSize]. // * * 2: fwRecurrentWeights. // * A 2-D tensor of shape [fwNumUnits, fwNumUnits]. // * * 3: fwBias. // * A 1-D tensor of shape [fwNumUnits]. // * * 4: fwHiddenState. // * A 2-D tensor of shape [batchSize, fwNumUnits]. Specifies a hidden // * state input for the first time step of the computation. // * * 5: bwWeights. // * A 2-D tensor of shape [bwNumUnits, inputSize]. // * * 6: bwRecurrentWeights. // * A 2-D tensor of shape [bwNumUnits, bwNumUnits]. // * * 7: bwBias. // * A 1-D tensor of shape [bwNumUnits]. // * * 8: bwHiddenState. // * A 2-D tensor of shape [batchSize, bwNumUnits]. Specifies a hidden // * state input for the first time step of the computation. // * * 9: auxInput. // * A 3-D tensor. The shape is defined by the input 13 (timeMajor). If // * it is set to true, then the input has a shape [maxTime, batchSize, // * auxInputSize], otherwise the input has a shape [batchSize, maxTime, // * auxInputSize]. Can be omitted. See the docs above for the usage // * modes explanation. // * * 10:fwAuxWeights. // * A 2-D tensor of shape [fwNumUnits, auxInputSize]. Can be omitted. // * See the docs above for the usage modes explanation. // * * 11:bwAuxWeights. // * A 2-D tensor of shape [bwNumUnits, auxInputSize]. Can be omitted. // * See the docs above for the usage modes explanation. // * * 12:fusedActivationFunction. // * A {@link FuseCode} value indicating the activation function. If // * “NONE” is specified then it results in a linear activation. // * * 13:timeMajor // * An {@link ANEURALNETWORKS_BOOL} scalar specifying the shape format // * of input and output tensors. // * * 14:mergeOutputs // * An {@link ANEURALNETWORKS_BOOL} scalar specifying if the outputs // * from forward and backward cells are separate (if set to false) or // * concatenated (if set to true). // * // * Outputs: // * * 0: fwOutput. // * A 3-D tensor. The first two dimensions of the shape are defined by // * the input 13 (timeMajor) and the third dimension is defined by the // * input 14 (mergeOutputs). If timeMajor is set to true, then the first // * two dimensions are [maxTime, batchSize], otherwise they are set to // * [batchSize, maxTime]. If mergeOutputs is set to true, then the third // * dimension is equal to (fwNumUnits + bwNumUnits), otherwise it is set // * to fwNumUnits. // * * 1: bwOutput. // * A 3-D tensor. If the input 14 (mergeOutputs) is set to true, then // * this tensor is not produced. The shape is defined by the input 13 // * (timeMajor). If it is set to true, then the shape is set to // * [maxTime, batchSize, bwNumUnits], otherwise the shape is set to // * [batchSize, maxTime, bwNumUnits]. // * * 2: The forward hidden state output. // * A 2-D tensor of shape [batchSize, fwNumUnits] containing a hidden // * state from the last time step in the sequence. This output is // * optional and can be omitted. If this output is present then output // * 3 must be present as well. // * Available since NNAPI feature level 4. // * * 3: The backward hidden state output. // * A 2-D tensor of shape [batchSize, bwNumUnits] containing a hidden // * state from the last time step in the sequence. This output is // * optional and can be omitted. If this output is present then output // * 2 must be present as well. // * Available since NNAPI feature level 4. // * // * Available since NNAPI feature level 3. // * // * Important: As of NNAPI feature level 3, there is no way to get the output state tensors out // * and NNAPI does not maintain internal states. This operator does not support the usage pattern // * in which multiple cells are chained and state tensors are propagated. BIDIRECTIONAL_SEQUENCE_RNN = 43, // * // * Greedily selects a subset of bounding boxes in descending order of score. // * // * This op applies the NMS algorithm to each class. In each loop of execution, // * the box with maximum score gets selected and removed from the pending set. // * The scores of the rest of the boxes are lowered according to the // * intersection-over-union (IOU) overlapping with the previously selected // * boxes and a specified NMS kernel method. Any boxes with score less // * than a threshold are removed from the pending set. // * // * Three NMS kernels are supported: // * * Hard: score_new = score_old * (1 if IoU < threshold else 0) // * * Linear: score_new = score_old * (1 if IoU < threshold else 1 - IoU) // * * Gaussian: score_new = score_old * exp(- IoU^2 / sigma)
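A minimal Odin sketch of the three score-update kernels above; the kernel choice and its parameters correspond to inputs 5-8 listed below (illustrative only, not part of this package):

package main

import "core:fmt"
import "core:math"

Nms_Kernel :: enum { Hard, Linear, Gaussian }

// Score update for one candidate box, given its IoU with the most
// recently selected box, per the three kernels above.
update_score :: proc(score, iou, iou_threshold, sigma: f32, kernel: Nms_Kernel) -> f32 {
	switch kernel {
	case .Hard:     return iou < iou_threshold ? score : 0
	case .Linear:   return iou < iou_threshold ? score : score * (1 - iou)
	case .Gaussian: return score * math.exp(-(iou*iou) / sigma)
	}
	return score
}

main :: proc() {
	fmt.println(update_score(0.9, 0.6, 0.5, 0.5, .Linear)) // 0.9 * (1 - 0.6) = 0.36
}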
// * // * Axis-aligned bounding boxes are represented by their upper-left corner // * coordinate (x1,y1) and lower-right corner coordinate (x2,y2). A valid // * bounding box should satisfy x1 <= x2 and y1 <= y2. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Inputs: // * * 0: A 2-D Tensor of shape [num_rois, num_classes], specifying the score // * of each bounding box proposal. The boxes are grouped by batches in the // * first dimension. Zero num_rois is supported for this tensor. // * * 1: A 2-D Tensor specifying the bounding boxes of shape // * [num_rois, num_classes * 4], organized in the order [x1, y1, x2, y2]. // * The boxes are grouped by batches in the first dimension. The sequential // * order of the boxes corresponds with input0. For input0 of type // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}, this tensor should be of // * {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, with zeroPoint of 0 and // * scale of 0.125. // * For input0 of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, // * this tensor should be of {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, // * with zeroPoint of -128 and scale of 0.125. // * Zero num_rois is supported for this tensor. // * * 2: A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape // * [num_rois], specifying the batch index of each box. Boxes with // * the same batch index are grouped together. // * * 3: An {@link ANEURALNETWORKS_FLOAT32} scalar, score_threshold. Boxes // * with scores lower than the threshold are filtered before sending // * to the NMS algorithm. // * * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the maximum // * number of selected bounding boxes for each image. Set to a negative // * value for unlimited number of output bounding boxes. // * * 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the NMS // * kernel method, options are 0:hard, 1:linear, 2:gaussian. // * * 6: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the IoU // * threshold in hard and linear NMS kernel. This field is ignored if // * gaussian kernel is selected. // * * 7: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the sigma in // * gaussian NMS kernel. This field is ignored if gaussian kernel is // * not selected. // * * 8: An {@link ANEURALNETWORKS_FLOAT32} scalar, nms_score_threshold. // * Boxes with scores lower than the threshold are dropped during the // * score updating phase in soft NMS. // * // * Outputs: // * * 0: A 1-D Tensor of the same {@link OperandCode} as input0, with shape // * [num_output_rois], specifying the score of each output box. The boxes // * are grouped by batches, but the sequential order in each batch is not // * guaranteed. For type of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * or {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, // * the scale and zero point must be the same as input0. // * * 1: A 2-D Tensor of the same {@link OperandCode} as input1, with shape // * [num_output_rois, 4], specifying the coordinates of each // * output bounding box with the same format as input1. The sequential // * order of the boxes corresponds with output0. For type of // * {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, the scale must be // * 0.125 and the zero point must be 0. // * * 2: A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape // * [num_output_rois], specifying the class of each output box. The // * sequential order of the boxes corresponds with output0. // * * 3: A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape // * [num_output_rois], specifying the batch index of each box. Boxes // * with the same batch index are grouped together. // * // * Available since NNAPI feature level 3. BOX_WITH_NMS_LIMIT = 44,
// * // * Casts a tensor to a type. // * // * This operation ignores the scale and zeroPoint of quantized tensors, // * e.g. it treats a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} input // * as a tensor of uint8 values. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * Since NNAPI feature level 4, casting tensors of the following // * {@link OperandCode} to the same {@link OperandCode} is supported: // * * {@link ANEURALNETWORKS_TENSOR_BOOL8} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM} // * // * Supported tensor rank: from 1 // * // * Inputs: // * * 0: A tensor. // * // * Outputs: // * * 0: A tensor with the same shape as input0. // * // * Available since NNAPI feature level 3. CAST = 45, // * // * Shuffle the channels of the input tensor. // * // * Given an input tensor and an integer value of num_groups, CHANNEL_SHUFFLE // * divides the channel dimension into num_groups groups, and reorganizes the // * channels by grouping channels with the same index in each group. // * // * Along the channel dimension, the output is calculated using this formula: // * // * output_channel[k * num_groups + g] = input_channel[g * group_size + k] // * // * where group_size = num_channels / num_groups // * // * The number of channels must be divisible by num_groups.
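A minimal Odin sketch of the CHANNEL_SHUFFLE formula above, applied to a single 1-D channel vector (illustrative, not part of this package):

package main

import "core:fmt"

// Permutes channels per the formula above:
// output[k*num_groups + g] = input[g*group_size + k], group_size = len/num_groups.
channel_shuffle :: proc(input: []f32, num_groups: int) -> []f32 {
	group_size := len(input) / num_groups
	output := make([]f32, len(input))
	for g in 0..<num_groups {
		for k in 0..<group_size {
			output[k*num_groups + g] = input[g*group_size + k]
		}
	}
	return output
}

main :: proc() {
	// 6 channels, 2 groups: [0 1 2 | 3 4 5] -> [0 3 1 4 2 5]
	fmt.println(channel_shuffle([]f32{0, 1, 2, 3, 4, 5}, 2))
}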
// * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: An n-D tensor, specifying the tensor to be shuffled. // * * 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the number of // * groups. // * * 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the dimension // * channel shuffle would be performed on. Negative index is used to // * specify axis from the end (e.g. -1 for the last axis). Must be in // * the range [-n, n). // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} and same shape as input0. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 3. CHANNEL_SHUFFLE = 46, // * // * Apply postprocessing steps to bounding box detections. // * // * Bounding box detections are generated by applying transformation on a set // * of predefined anchors with the bounding box deltas from bounding box // * regression. A final step of hard NMS is applied to limit the number of // * returned boxes. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Inputs: // * * 0: A 3-D Tensor of shape [batches, num_anchors, num_classes], specifying // * the score of each anchor with each class. Class 0 for each // * [batches, num_anchors, 0] is background and will be ignored. // * * 1: A 3-D Tensor of shape [batches, num_anchors, length_box_encoding], with // * the first four values in length_box_encoding specifying the bounding // * box deltas. The box deltas are encoded in the order of [dy, dx, dh, dw], // * where dy and dx are the linear-scale relative correction factors for the // * center position of the bounding box with respect to the width and height, // * and dh and dw are the log-scale relative correction factors for the width // * and height. All the entries in length_box_encoding beyond the first four // * values are ignored in this operation. // * * 2: A 2-D Tensor of shape [num_anchors, 4], specifying the shape of each // * predefined anchor, with format [ctr_y, ctr_x, h, w], where ctr_y and // * ctr_x are the center position of the box, and h and w are the height // * and the width. // * * 3: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the scaling // * factor for dy in bounding box deltas. // * * 4: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the scaling // * factor for dx in bounding box deltas. // * * 5: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the scaling // * factor for dh in bounding box deltas. // * * 6: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the scaling // * factor for dw in bounding box deltas. // * * 7: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to use the regular // * multi-class NMS algorithm that does NMS separately for each class, // * set to false for a faster algorithm that only does one single NMS // * using the highest class score. // * * 8: An {@link ANEURALNETWORKS_INT32} scalar, max_num_detections, specifying // * the maximum number of boxes for the output. Boxes with the lowest // * scores are discarded to meet the limit. // * * 9: An {@link ANEURALNETWORKS_INT32} scalar, only used when input7 is // * set to false, specifying the maximum number of classes per detection. // * * 10: An {@link ANEURALNETWORKS_INT32} scalar, only used when input7 is // * set to true, specifying the maximum number of detections when // * applying NMS algorithm for each single class. // * * 11: A scalar, score_threshold. Boxes with scores lower than the // * threshold are filtered before sending to the NMS algorithm. The // * scalar must be of {@link ANEURALNETWORKS_FLOAT16} if input0 is of // * {@link ANEURALNETWORKS_TENSOR_FLOAT16} and of // * {@link ANEURALNETWORKS_FLOAT32} if input0 is of // * {@link ANEURALNETWORKS_TENSOR_FLOAT32}. // * * 12: A scalar, specifying the IoU threshold for hard NMS. The scalar // * must be of {@link ANEURALNETWORKS_FLOAT16} if input0 is of // * {@link ANEURALNETWORKS_TENSOR_FLOAT16} and of // * {@link ANEURALNETWORKS_FLOAT32} if input0 is of // * {@link ANEURALNETWORKS_TENSOR_FLOAT32}. // * * 13: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to include // * background class in the list of label map for the output, set // * to false to not include the background. When the background // * class is included, it has label 0 and the output classes start // * at 1 in the label map, otherwise, the output classes start at 0. // * // * Outputs: // * * 0: A 2-D tensor of the same {@link OperandCode} as input0, with shape // * [batches, max_num_detections], specifying the score of each output // * detection. // * * 1: A 3-D tensor of shape [batches, max_num_detections, 4], specifying the // * coordinates of each output bounding box, with format // * [y1, x1, y2, x2]. // * * 2: A 2-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape // * [batches, max_num_detections], specifying the class label for each // * output detection. // * * 3: A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape [batches], // * specifying the number of valid output detections for each batch. // * // * Available since NNAPI feature level 3. DETECTION_POSTPROCESSING = 47,
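A minimal Odin sketch of the box decoding described for DETECTION_POSTPROCESSING input 1 above; the helper is illustrative and assumes the scaling factors from inputs 3-6 have already been divided out (not part of this package):

package main

import "core:fmt"
import "core:math"

Box :: struct { ctr_y, ctr_x, h, w: f32 }

// Applies one [dy, dx, dh, dw] delta to an anchor: linear-scale shifts
// for the center, log-scale corrections for height and width.
decode_box :: proc(anchor: Box, dy, dx, dh, dw: f32) -> Box {
	return Box{
		ctr_y = anchor.ctr_y + dy*anchor.h,
		ctr_x = anchor.ctr_x + dx*anchor.w,
		h     = anchor.h * math.exp(dh),
		w     = anchor.w * math.exp(dw),
	}
}

main :: proc() {
	fmt.println(decode_box(Box{10, 10, 4, 4}, 0.5, 0, math.ln(f32(2)), 0))
	// center moved down by 2, height doubled: Box{12, 10, 8, 4}
}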
// * // * For input tensors x and y, computes x == y elementwise. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_BOOL8} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1 // * // * This operation supports broadcasting. // * // * Inputs: // * * 0: A tensor. // * * 1: A tensor of the same {@link OperandCode} and dimensions compatible // * with input0. // * // * Outputs: // * * 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}. // * // * Available since NNAPI feature level 3. EQUAL = 48, // * // * Computes exponential of x element-wise. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Supported tensor rank: from 1. // * // * Inputs: // * * 0: A tensor. // * // * Outputs: // * * 0: The output tensor of same shape as input0. // * // * Available since NNAPI feature level 3. EXP = 49, // * // * Inserts a dimension of 1 into a tensor's shape. // * // * Given a tensor input, this operation inserts a dimension of 1 at the // * given dimension index of input's shape. The dimension index starts at // * zero; if you specify a negative dimension index, it is counted backward // * from the end. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1 // * // * Inputs: // * * 0: An n-D tensor. // * * 1: An {@link ANEURALNETWORKS_INT32} scalar specifying the dimension // * index to expand. Must be in the range [-(n + 1), (n + 1)). // * // * Outputs: // * * 0: An (n + 1)-D tensor with the same {@link OperandCode} and data as // * input0. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 3. EXPAND_DIMS = 50, // * // * Gathers values along an axis. // * // * Produces an output tensor with shape // * input0.dimension[:axis] + indices.dimension + input0.dimension[axis + 1:] // * where: // * # Vector indices (output is rank(input0)). // * output[a_0, ..., a_n, i, b_0, ..., b_n] = // * input0[a_0, ..., a_n, indices[i], b_0, ..., b_n] // * // * # Higher rank indices (output is rank(input0) + rank(indices) - 1). // * output[a_0, ..., a_n, i, ..., j, b_0, ... b_n] = // * input0[a_0, ..., a_n, indices[i, ..., j], b_0, ..., b_n]
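A minimal Odin sketch of GATHER for the simplest case above, a 1-D input with vector indices (illustrative, not part of this package):

package main

import "core:fmt"

// GATHER with a 1-D input and vector indices, per the first formula above:
// output[i] = input[indices[i]].
gather_1d :: proc(input: []f32, indices: []i32) -> []f32 {
	output := make([]f32, len(indices))
	for idx, i in indices {
		output[i] = input[idx]
	}
	return output
}

main :: proc() {
	fmt.println(gather_1d([]f32{10, 20, 30, 40}, []i32{3, 0, 0})) // [40, 10, 10]
}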
// * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1 // * // * Inputs: // * * 0: An n-D tensor from which to gather values. // * * 1: An {@link ANEURALNETWORKS_INT32} scalar specifying the axis. // * Negative index is used to specify axis from the end // * (e.g. -1 for the last axis). Must be in the range [-n, n). // * * 2: A k-D tensor {@link ANEURALNETWORKS_TENSOR_INT32} of indices. // * The values must be in the bounds of the corresponding dimensions // * of input0. // * // * Outputs: // * * 0: An (n + k - 1)-D tensor with the same {@link OperandCode} as input0. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 3. GATHER = 51, // * // * Generate axis-aligned bounding box proposals. // * // * Bounding box proposals are generated by applying transformation on a set // * of predefined anchors with the bounding box deltas from bounding box // * regression. A final step of hard NMS is applied to limit the number of // * returned boxes. // * // * Axis-aligned bounding boxes are represented by their upper-left corner // * coordinate (x1,y1) and lower-right corner coordinate (x2,y2). A valid // * bounding box should satisfy x1 <= x2 and y1 <= y2. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Inputs: // * * 0: A 4-D Tensor specifying the score of each anchor at each // * location. With "NHWC" data layout, the tensor shape is // * [batches, height, width, num_anchors]. With "NCHW" data layout, // * the tensor shape is [batches, num_anchors, height, width]. // * * 1: A 4-D Tensor specifying the bounding box deltas. With "NHWC" data // * layout, the tensor shape is [batches, height, width, num_anchors * 4]. // * With "NCHW" data layout, the tensor shape is // * [batches, num_anchors * 4, height, width]. The box deltas are encoded // * in the order of [dx, dy, dw, dh], where dx and dy are the linear-scale // * relative correction factors for the center position of the bounding box // * with respect to the width and height, and dw and dh are the log-scale // * relative correction factors for the width and height. The last // * dimension is the channel dimension. // * * 2: A 2-D Tensor of shape [num_anchors, 4], specifying the shape of each // * predefined anchor, with format [x1, y1, x2, y2].
For input0 of type // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} or // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, this tensor should be of // * {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}, with scale of 0.125. // * * 3: A 2-D Tensor of shape [batches, 2], specifying the size of // * each image in the batch, with format [image_height, image_width]. // * For input0 of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} or // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, this // * tensor should be of {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}, with // * scale of 0.125. // * * 4: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the ratio // * from the height of original image to the height of feature map. // * * 5: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the ratio // * from the width of original image to the width of feature map. // * * 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the maximum // * number of boxes before going into the hard NMS algorithm. Boxes // * with the lowest scores are discarded to meet the limit. Set to // * a non-positive value for unlimited number. // * * 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the maximum // * number of boxes returning from the hard NMS algorithm. Boxes // * with the lowest scores are discarded to meet the limit. Set to // * a non-positive value for unlimited number. // * * 8: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the IoU // * threshold for hard NMS. // * * 9: An {@link ANEURALNETWORKS_FLOAT32} scalar, min_size. Boxes with // * height or width lower than the absolute threshold are filtered out. // * * 10: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify // * NCHW data layout for input0 and input1. Set to false for NHWC. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0, of shape // * [num_output_rois], specifying the score of each output box. // * The boxes are grouped by batches, but the sequential order in // * each batch is not guaranteed. For type of // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} or // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, the scale and zero // * point must be the same as input0. // * * 1: A tensor of the same {@link OperandCode} as input3, of shape // * [num_output_rois, 4], specifying the coordinates of each output // * bounding box for each class, with format [x1, y1, x2, y2]. // * The sequential order of the boxes corresponds with output0. // * For type of {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, the // * scale must be 0.125 and the zero point must be 0. // * * 2: A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape // * [num_output_rois], specifying the batch index of each box. Boxes // * with the same batch index are grouped together. // * // * Available since NNAPI feature level 3. GENERATE_PROPOSALS = 52, // * // * For input tensors x and y, computes x > y elementwise. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_BOOL8} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1 // * // * This operation supports broadcasting. // * // * Inputs: // * * 0: A tensor. // * * 1: A tensor of the same {@link OperandCode} and dimensions compatible // * with input0. 
// * // * Outputs: // * * 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}. // * // * Available since NNAPI feature level 3. GREATER = 53, // * // * For input tensors x and y, computes x >= y elementwise. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_BOOL8} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1 // * // * This operation supports broadcasting. // * // * Inputs: // * * 0: A tensor. // * * 1: A tensor of the same {@link OperandCode} and dimensions compatible // * with input0. // * // * Outputs: // * * 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}. // * // * Available since NNAPI feature level 3. GREATER_EQUAL = 54, // * // * Performs a grouped 2-D convolution operation. // * // * Given an input tensor of shape [batches, height, width, depth_in] and a // * filter tensor of shape [depth_out, filter_height, filter_width, depth_group] // * containing depth_out convolutional filters of depth depth_group, GROUPED_CONV // * applies a group of different filters to each input channel group, then // * concatenates the results together. // * // * Specifically, the input channels are divided into num_groups groups, each with // * depth depth_group, i.e. depth_in = num_groups * depth_group. The convolutional // * filters are also divided into num_groups groups, i.e. depth_out is divisible // * by num_groups. GROUPED_CONV applies each group of filters to the corresponding // * input channel group, and the results are concatenated together. // * // * The output dimensions are functions of the filter dimensions, stride, and // * padding. // * // * The values in the output tensor are computed as: // * // * output[b, i, j, g * channel_multiplier + q] = // * sum_{di, dj, dk} ( // * input[b, strides[1] * i + di, strides[2] * j + dj, // * g * depth_group + dk] * // * filter[g * channel_multiplier + q, di, dj, dk] // * ) + bias[channel] // * // * where channel_multiplier = depth_out / num_groups // * // * Supported tensor {@link OperandCode} configurations: // * * 16 bit floating point: // * * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} for input, filter, output, and bias. // * // * * 32 bit floating point: // * * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} for input, filter, output, and bias. // * // * * Quantized: // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} for input, filter, and output. // * * * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (with scale set to // * * * input.scale * filter.scale). // * // * * Quantized signed (since NNAPI feature level 4): // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} for input, filter, and output. // * * * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (with scale set to // * * * input.scale * filter.scale). // * // * * Quantized with symmetric per channel quantization for the filter: // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} for input, and output. // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter. // * * * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (scale set to 0.0, // * * * each value scaling is separate and equal to input.scale * filter.scales[channel]).
// * // * * Quantized signed with filter symmetric per channel quantization // * (since NNAPI feature level 4): // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} for input, and output. // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter. // * * * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (scale set to 0.0, // * * * each value scaling is separate and equal to input.scale * filter.scales[channel]). // * // * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout. // * With the default data layout NHWC, the data is stored in the order of: // * [batch, height, width, channels]. Alternatively, the data layout could // * be NCHW, the data storage order of: [batch, channels, height, width]. // * // * Both explicit padding and implicit padding are supported. // * // * Inputs (explicit padding): // * * 0: A 4-D tensor, of shape [batches, height, width, depth_in], // * specifying the input, where depth_in = num_groups * depth_group. // * * 1: A 4-D tensor, of shape // * [depth_out, filter_height, filter_width, depth_group], specifying // * the filter, where depth_out must be divisible by num_groups. For // * tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} // * the channel dimension (channelDim at // * {@link ANeuralNetworksSymmPerChannelQuantParams}) must be set to 0. // * * 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input // * tensor of type {@link ANEURALNETWORKS_TENSOR_FLOAT32} or // * {@link ANEURALNETWORKS_TENSOR_FLOAT16}, the bias must be of the same type. // * For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} // * the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint // * of 0 and bias_scale == input_scale * filter_scale. For filter tensor // * of {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}, the bias // * should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of // * 0 and bias_scale of 0. The actual scale of each value 'i' is equal to // * bias_scale[i] = input_scale * filter_scale[i]. // * * 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the left, in the ‘width’ dimension. // * * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the right, in the ‘width’ dimension. // * * 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the top, in the ‘height’ dimension. // * * 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the bottom, in the ‘height’ dimension. // * * 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘width’ dimension. // * * 8: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘height’ dimension. // * * 9: An {@link ANEURALNETWORKS_INT32} scalar, specifying the number of // * groups. // * * 10: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the // * {@link FuseCode} values. Specifies the activation to // * invoke on the result. // * * 11: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify // * NCHW data layout for input0 and output0. Set to false for NHWC. // * // * Inputs (implicit padding): // * * 0: A 4-D tensor, of shape [batches, height, width, depth_in], // * specifying the input, where depth_in = num_groups * depth_group. 
// * * 1: A 4-D tensor, of shape // * [depth_out, filter_height, filter_width, depth_group], specifying // * the filter, where depth_out must be divisible by num_groups. For // * tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} // * the channel dimension (ANeuralNetworksSymmPerChannelQuantParams::channelDim) // * must be set to 0. // * * 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input // * tensor of type {@link ANEURALNETWORKS_TENSOR_FLOAT32} or // * {@link ANEURALNETWORKS_TENSOR_FLOAT16}, the bias must be of the same type. // * For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} // * the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint // * of 0 and bias_scale == input_scale * filter_scale. For filter tensor // * of {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}, the bias // * should be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of // * 0 and bias_scale of 0. The actual scale of each value 'i' is equal to // * bias_scale[i] = input_scale * filter_scale[i]. // * * 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the implicit // * padding scheme, has to be one of the // * {@link PaddingCode} values. // * * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘width’ dimension. // * * 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘height’ dimension. // * * 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the number of // * groups. // * * 7: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the // * {@link FuseCode} values. Specifies the activation to // * invoke on the result. // * * 8: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify // * NCHW data layout for input0 and output0. Set to false for NHWC. // * // * Outputs: // * * 0: The output 4-D tensor, of shape // * [batches, out_height, out_width, depth_out]. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint can be different from inputs' scale and zeroPoint. // * // * Available since NNAPI feature level 3. GROUPED_CONV_2D = 55, // * // * Localize the maximum keypoints from heatmaps. // * // * This operation approximates the accurate maximum keypoint scores and // * indices after bicubic upscaling by using Taylor expansion up to the // * quadratic term. // * // * The bounding box is represented by its upper-left corner coordinate // * (x1,y1) and lower-right corner coordinate (x2,y2) in the original image. // * A valid bounding box should satisfy x1 <= x2 and y1 <= y2. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout. // * With the default data layout NHWC, the data is stored in the order of: // * [batch, height, width, channels]. Alternatively, the data layout could // * be NCHW, the data storage order of: [batch, channels, height, width].
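For the per-channel quantized filter configurations of GROUPED_CONV_2D above, a minimal Odin sketch of the implied per-channel bias scales, bias_scale[i] = input_scale * filter_scale[i] (illustrative helper, not part of this package):

package main

import "core:fmt"

// Per-channel bias scales for a QUANT8_SYMM_PER_CHANNEL filter:
// the bias operand's own scale is 0, and each value i is effectively
// scaled by input_scale * filter_scale[i], with zeroPoint 0.
bias_scales :: proc(input_scale: f32, filter_scales: []f32) -> []f32 {
	out := make([]f32, len(filter_scales))
	for s, i in filter_scales {
		out[i] = input_scale * s
	}
	return out
}

main :: proc() {
	fmt.println(bias_scales(0.5, []f32{0.1, 0.2, 0.4})) // [0.05, 0.1, 0.2]
}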
// * // * Inputs: // * * 0: A 4-D Tensor of shape // * [num_boxes, heatmap_size, heatmap_size, num_keypoints], // * specifying the heatmaps, the height and width of heatmaps should // * be the same, and must be greater than or equal to 2. // * * 1: A 2-D Tensor of shape [num_boxes, 4], specifying the bounding boxes, // * each with format [x1, y1, x2, y2]. For input0 of type // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}, this tensor should // * be of {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, with zeroPoint // * of 0 and scale of 0.125. // * For input0 of type // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, this tensor // * should be of {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, with // * zeroPoint of -128 and scale of 0.125. // * * 2: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify // * NCHW data layout for input0. Set to false for NHWC. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0, with shape // * [num_boxes, num_keypoints], specifying score of the keypoints. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} or // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint can be different from input0 scale and zeroPoint. // * * 1: A tensor of the same {@link OperandCode} as input1, with shape // * [num_boxes, num_keypoints, 2], specifying the location of // * the keypoints, the second dimension is organized as // * [keypoint_x, keypoint_y]. // * For type of {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, the // * scale must be 0.125 and the zero point must be 0. // * // * Available since NNAPI feature level 3. HEATMAP_MAX_KEYPOINT = 56, // * // * Applies instance normalization to the input tensor. // * // * The values in the output tensor are computed as: // * // * output[b, h, w, c] = // * (input[b, h, w, c] - mean[b, c]) * gamma / // * sqrt(var[b, c] + epsilon) + beta // * // * Where the mean and variance are computed across the spatial dimensions: // * // * mean[b, c] = // * sum_{h, w}(input[b, h, w, c]) / sum(1) // * // * var[b, c] = // * sum_{h, w}(pow(input[b, h, w, c] - mean[b, c], 2)) / sum(1) // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout. // * With the default data layout NHWC, the data is stored in the order of: // * [batch, height, width, channels]. Alternatively, the data layout could // * be NCHW, the data storage order of: [batch, channels, height, width]. // * // * Inputs: // * * 0: An n-D tensor, specifying the tensor to be normalized. // * * 1: A scalar, specifying gamma, the scale applied to the normalized // * tensor. The scalar must be of {@link ANEURALNETWORKS_FLOAT16} if // * input0 is of {@link ANEURALNETWORKS_TENSOR_FLOAT16} and of // * {@link ANEURALNETWORKS_FLOAT32} if input0 is of // * {@link ANEURALNETWORKS_TENSOR_FLOAT32}. // * * 2: A scalar, specifying beta, the offset applied to the normalized // * tensor. The scalar must be of {@link ANEURALNETWORKS_FLOAT16} if // * input0 is of {@link ANEURALNETWORKS_TENSOR_FLOAT16} and of // * {@link ANEURALNETWORKS_FLOAT32} if input0 is of // * {@link ANEURALNETWORKS_TENSOR_FLOAT32}. // * * 3: A scalar, specifying epsilon, the small value added to variance to // * avoid dividing by zero. 
The scalar must be of {@link ANEURALNETWORKS_FLOAT16} if // * input0 is of {@link ANEURALNETWORKS_TENSOR_FLOAT16} and of // * {@link ANEURALNETWORKS_FLOAT32} if input0 is of // * {@link ANEURALNETWORKS_TENSOR_FLOAT32}. // * * 4: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify // * NCHW data layout for input0 and output0. Set to false for NHWC. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} and same shape as input0. // * // * Available since NNAPI feature level 3. INSTANCE_NORMALIZATION = 57, // * // * For input tensors x and y, computes x < y elementwise. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_BOOL8} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1 // * // * This operation supports broadcasting. // * // * Inputs: // * * 0: A tensor. // * * 1: A tensor of the same {@link OperandCode} and dimensions compatible // * with input0. // * // * Outputs: // * * 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}. // * // * Available since NNAPI feature level 3. LESS = 58, // * // * For input tensors x and y, computes x <= y elementwise. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_BOOL8} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1 // * // * This operation supports broadcasting. // * // * Inputs: // * * 0: A tensor. // * * 1: A tensor of the same {@link OperandCode} and dimensions compatible // * with input0. // * // * Outputs: // * * 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}. // * // * Available since NNAPI feature level 3. LESS_EQUAL = 59, // * // * Computes natural logarithm of x element-wise. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Supported tensor rank: from 1. // * // * Inputs: // * * 0: A tensor. // * // * Outputs: // * * 0: The output tensor of same shape as input0. // * // * Available since NNAPI feature level 3. LOG = 60, // * // * Returns the truth value of x AND y element-wise. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_BOOL8} // * // * Supported tensor rank: from 1 // * // * This operation supports broadcasting. // * // * Inputs: // * * 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}. // * * 1: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8} and dimensions // * compatible with input0. // * // * Outputs: // * * 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}. // * // * Available since NNAPI feature level 3. LOGICAL_AND = 61, // * // * Computes the truth value of NOT x element-wise. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_BOOL8} // * // * Supported tensor rank: from 1. // * // * Inputs: // * * 0: A tensor. // * // * Outputs: // * * 0: The output tensor of same shape as input0. // * // * Available since NNAPI feature level 3. LOGICAL_NOT = 62, // * // * Returns the truth value of x OR y element-wise. 
// * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_BOOL8} // * // * Supported tensor rank: from 1 // * // * This operation supports broadcasting. // * // * Inputs: // * * 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}. // * * 1: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8} and dimensions // * compatible with input0. // * // * Outputs: // * * 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}. // * // * Available since NNAPI feature level 3. LOGICAL_OR = 63, // * // * Computes the log softmax activations given logits. // * // * The output is calculated using this formula: // * // * output = logits * beta - log(reduce_sum(exp(logits * beta), axis)) // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Supported tensor rank: from 1. // * // * Inputs: // * * 0: A tensor specifying the input logits. // * * 1: A scalar, specifying the positive scaling factor for the exponent, // * beta. // * For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT16}, the beta // * value must be of {@link ANEURALNETWORKS_FLOAT16}. // * For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, the beta // * value must be of {@link ANEURALNETWORKS_FLOAT32}. // * * 2: An {@link ANEURALNETWORKS_INT32} scalar specifying the axis to // * reduce across. Negative index is used to specify axis from the // * end (e.g. -1 for the last axis). Must be in the range [-n, n). // * // * Outputs: // * * 0: The output tensor of the same {@link OperandCode} and shape as // * input0. // * // * Available since NNAPI feature level 3. LOG_SOFTMAX = 64,
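A minimal Odin sketch of the LOG_SOFTMAX formula above over a 1-D slice (illustrative, not part of this package):

package main

import "core:fmt"
import "core:math"

// LOG_SOFTMAX over one axis, per the formula above:
// output = logits*beta - ln(sum(exp(logits*beta))).
// A production kernel would subtract max(logits*beta) first for stability.
log_softmax :: proc(logits: []f32, beta: f32) -> []f32 {
	sum: f32
	for x in logits {
		sum += math.exp(x * beta)
	}
	log_sum := math.ln(sum)
	out := make([]f32, len(logits))
	for x, i in logits {
		out[i] = x*beta - log_sum
	}
	return out
}

main :: proc() {
	fmt.println(log_softmax([]f32{1, 2, 3}, 1)) // ~[-2.4076, -1.4076, -0.4076]
}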
// * // * Returns the element-wise maximum of two tensors. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1. // * // * Inputs: // * * 0: A tensor. // * * 1: A tensor of the same {@link OperandCode} and compatible dimensions // * with input0. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} tensor, // * the scale and zeroPoint can be different from input0 scale and zeroPoint. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint can be different from inputs' scale and zeroPoint. // * // * Available since NNAPI feature level 3. MAXIMUM = 65, // * // * Returns the element-wise minimum of two tensors. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1. // * // * Inputs: // * * 0: A tensor. // * * 1: A tensor of the same {@link OperandCode} and compatible dimensions // * with input0. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} tensor, // * the scale and zeroPoint can be different from input0 scale and zeroPoint. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint can be different from inputs' scale and zeroPoint. // * // * Available since NNAPI feature level 3. MINIMUM = 66, // * // * Computes numerical negative value element-wise. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * // * Supported tensor rank: from 1. // * // * Inputs: // * * 0: A tensor. // * // * Outputs: // * * 0: The output tensor of same shape as input0. // * // * Available since NNAPI feature level 3. NEG = 67, // * // * For input tensors x and y, computes x != y elementwise. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_BOOL8} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1 // * // * This operation supports broadcasting. // * // * Inputs: // * * 0: A tensor. // * * 1: A tensor of the same {@link OperandCode} and dimensions compatible // * with input0. // * // * Outputs: // * * 0: A tensor of {@link ANEURALNETWORKS_TENSOR_BOOL8}. // * // * Available since NNAPI feature level 3. NOT_EQUAL = 68, // * // * Pads a tensor with the given constant value according to the specified // * paddings. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: An n-D tensor, specifying the tensor to be padded. // * * 1: A 2-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the paddings // * for each spatial dimension of the input tensor. The shape of the // * tensor must be {rank(input0), 2}. // * padding[i, 0] specifies the number of elements to be padded in the // * front of dimension i. // * padding[i, 1] specifies the number of elements to be padded after // * the end of dimension i. // * * 2: A scalar specifying the value to use for padding input0. // * For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT16}, the // * pad value must be of {@link ANEURALNETWORKS_FLOAT16}. // * For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, the // * pad value must be of {@link ANEURALNETWORKS_FLOAT32}. // * For input tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, // * the pad value must be of {@link ANEURALNETWORKS_INT32}. The // * scale and zeroPoint are assumed to be the same as in input0. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. The // * output tensor has the same rank as input0, and each // * dimension of the output tensor has the same size as the // * corresponding dimension of the input tensor plus the size // * of the padding: // * output0.dimension[i] = // * padding[i, 0] + input0.dimension[i] + padding[i, 1] // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 3. PAD_V2 = 69,
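A minimal Odin sketch of the PAD_V2 output-dimension formula above (illustrative, not part of this package):

package main

import "core:fmt"

// Output dimensions of PAD_V2, per the formula above:
// out[i] = padding[i][0] + in[i] + padding[i][1].
padded_dims :: proc(dims: []int, padding: [][2]int) -> []int {
	out := make([]int, len(dims))
	for d, i in dims {
		out[i] = padding[i][0] + d + padding[i][1]
	}
	return out
}

main :: proc() {
	fmt.println(padded_dims([]int{2, 3}, [][2]int{{1, 1}, {0, 2}})) // [4, 5]
}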
// * // * Computes the power of one value to another. // * // * Given a tensor base and a tensor exponent, this operation computes // * base^exponent elementwise. // * // * This operation supports broadcasting. The size of the output is the // * maximum size along each dimension of the input operands. It starts with // * the trailing dimensions, and works its way forward. // * // * For example: // * base.dimension = {4, 1, 2} // * exponent.dimension = {5, 4, 3, 1} // * output.dimension = {5, 4, 3, 2} // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Supported tensor rank: from 1 // * // * Inputs: // * * 0: A tensor specifying the base. // * * 1: A tensor specifying the exponent. // * // * Outputs: // * * 0: An output tensor. // * // * Available since NNAPI feature level 3. POW = 70, // * // * Parametric Rectified Linear Unit. // * // * It follows: f(x) = alpha * x for x < 0, f(x) = x for x >= 0, where alpha // * is a learned array with the same {@link OperandCode} and compatible // * dimensions as input x. // * // * Two dimensions are compatible when: // * 1. they are equal, or // * 2. one of them is 1 // * // * The size of the output is the maximum size along each dimension of the // * input operands. It starts with the trailing dimensions, and works its way // * forward. // * // * Example: // * input.dimension = {4, 1, 2} // * alpha.dimension = {5, 4, 3, 1} // * output.dimension = {5, 4, 3, 2} // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1 // * // * Inputs: // * * 0: A tensor, specifying the input. // * * 1: A tensor of the same {@link OperandCode}, and compatible dimensions // * as input0, specifying the alpha. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint can be different from input0 scale and zeroPoint. // * // * Available since NNAPI feature level 3. PRELU = 71, // * // * Quantizes the input tensor. // * // * The formula for {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} output tensor is: // * // * output = max(0, min(255, round(input / scale) + zeroPoint)) // * // * The formula for {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} output // * tensor is: // * // * output = max(-128, min(127, round(input / scale) + zeroPoint)) // * // * Supported input tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Supported output tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1 // * // * Inputs: // * * 0: A tensor, may be zero-sized. // * // * Outputs: // * * 0: The output tensor of same shape as input0, but with // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} or // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}. // * // * Available since NNAPI feature level 3.
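A minimal Odin sketch of the two QUANTIZE clamp formulas above (illustrative helpers, not part of this package):

package main

import "core:fmt"
import "core:math"

// QUANT8_ASYMM: max(0, min(255, round(x/scale) + zeroPoint)).
quantize_u8 :: proc(x, scale: f32, zero_point: i32) -> u8 {
	return u8(clamp(i32(math.round(x / scale)) + zero_point, 0, 255))
}

// QUANT8_ASYMM_SIGNED: max(-128, min(127, round(x/scale) + zeroPoint)).
quantize_i8 :: proc(x, scale: f32, zero_point: i32) -> i8 {
	return i8(clamp(i32(math.round(x / scale)) + zero_point, -128, 127))
}

main :: proc() {
	fmt.println(quantize_u8(0.25, 1.0/128.0, 128)) // 160
	fmt.println(quantize_i8(0.25, 1.0/128.0, 0))   // 32
}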
QUANTIZE = 72, // * // * A version of quantized LSTM, using 16 bit quantization for internal // * state. // * // * There is no projection layer, so cell state size is equal to the output // * size. // * // * Inputs: // * * 0: A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and shape [numBatches, inputSize] specifying the input to the LSTM // * cell. Tensor is quantized with a fixed quantization range of // * [-1, 127/128] (scale = 1/128, zeroPoint = 128). // * * 1: The input-to-input weights. // * A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and shape [outputSize, inputSize] specifying input-to-input part of // * weights for fully-connected layer inside the LSTM cell. // * Quantization zero point and scale must be the same across all the // * weights. // * * 2: The input-to-forget weights. // * A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and shape [outputSize, inputSize] specifying input-to-forget part of // * weights for fully-connected layer inside the LSTM cell. // * Quantization zero point and scale must be the same across all the // * weights. // * * 3: The input-to-cell weights. // * A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and shape [outputSize, inputSize] specifying input-to-cell part of // * weights for fully-connected layer inside the LSTM cell. // * Quantization zero point and scale must be the same across all the // * weights. // * * 4: The input-to-output weights. // * A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and shape [outputSize, inputSize] specifying input-to-output part of // * weights for fully-connected layer inside the LSTM cell. // * Quantization zero point and scale must be the same across all the // * weights. // * * 5: The recurrent-to-input weights. // * A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and shape [outputSize, outputSize] specifying recurrent-to-input part // * of weights for fully-connected layer inside the LSTM cell. // * Quantization zero point and scale must be the same across all the // * weights. // * * 6: The recurrent-to-forget weights. // * A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and shape [outputSize, outputSize] specifying recurrent-to-forget // * part of weights for fully-connected layer inside the LSTM cell. // * Quantization zero point and scale must be the same across all the // * weights. // * * 7: The recurrent-to-cell weights. // * A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and shape [outputSize, outputSize] specifying recurrent-to-cell part // * of weights for fully-connected layer inside the LSTM cell. // * Quantization zero point and scale must be the same across all the // * weights. // * * 8: The recurrent-to-output weights. // * A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and shape [outputSize, outputSize] specifying recurrent-to-output // * part of weights for fully-connected layer inside the LSTM cell. // * Quantization zero point and scale must be the same across all the // * weights. // * * 9: The input gate bias. // * A 1-D tensor of type {@link ANEURALNETWORKS_TENSOR_INT32} and shape // * [outputSize] specifying the bias for the fully-connected layer // * inside the LSTM cell. Bias is quantized with scale being a product // * of input and weights scales and zeroPoint equal to 0. // * * 10:The forget gate bias. 
// * A 1-D tensor of type {@link ANEURALNETWORKS_TENSOR_INT32} and shape // * [outputSize] specifying the bias for the fully-connected layer // * inside the LSTM cell. Bias is quantized with scale being a product // * of input and weights scales and zeroPoint equal to 0. // * * 11:The cell bias. // * A 1-D tensor of type {@link ANEURALNETWORKS_TENSOR_INT32} and shape // * [outputSize] specifying the bias for the fully-connected layer // * inside the LSTM cell. Bias is quantized with scale being a product // * of input and weights scales and zeroPoint equal to 0. // * * 12:The output gate bias. // * A 1-D tensor of type {@link ANEURALNETWORKS_TENSOR_INT32} and shape // * [outputSize] specifying the bias for the fully-connected layer // * inside the LSTM cell. Bias is quantized with scale being a product // * of input and weights scales and zeroPoint equal to 0. // * * 13: A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM} // * and shape [numBatches, outputSize] specifying the cell state from the // * previous time step of the LSTM cell. It is quantized using a // * quantization range of [-2^4, 2^4 * 32767/32768] (scale = 2^4 / // * 32768, zeroPoint = 0). // * * 14: A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and shape [numBatches, outputSize] specifying the output of the LSTM // * cell from previous time-step. Tensor is quantized with a fixed // * quantization range of [-1, 127/128] (scale = 1/128, zeroPoint = // * 128). // * // * Outputs: // * * 0: A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM} // * and shape [numBatches, outputSize] which contains a cell state from // * the current time step. Tensor is quantized using a quantization // * range of [-2^4, 2^4 * 32767/32768] (scale = 2^4 / 32768, zeroPoint = // * 0). // * * 1: A 2-D tensor of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and shape [numBatches, outputSize] which contains the output value. // * Tensor is quantized with a fixed quantization range of [-1, 127/128] // * (scale = 1/128, zeroPoint = 128). // * // * Available since NNAPI feature level 3. QUANTIZED_16BIT_LSTM = 73, // * // * Draws samples from a multinomial distribution. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Inputs: // * * 0: A 2-D tensor with shape [batches, classes], specifying the // * unnormalized log-probabilities for all classes. // * * 1: A scalar {@link ANEURALNETWORKS_INT32}, specifying the number of // * independent samples to draw for each row slice. // * * 2: A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor with shape [2], // * specifying seeds used to initialize the random distribution. If both // * provided seeds are 0, both will be randomly generated. // * // * Outputs: // * * 0: A 2-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor with shape // * [batches, samples], containing the drawn samples. // * // * Available since NNAPI feature level 3. RANDOM_MULTINOMIAL = 74, // * // * Reduces a tensor by computing the "logical and" of elements along given // * dimensions. // * // * If keep_dims is true, the reduced dimensions are // * retained with length 1. Otherwise, the rank of the tensor is reduced by // * 1 for each entry in dimensions. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_BOOL8} // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: An n-D tensor. // * * 1: A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The dimensions // * to reduce.
Dimension values must be in the range [-n, n). // * * 2: An {@link ANEURALNETWORKS_BOOL} scalar, keep_dims. If true, // * retains reduced dimensions with length 1. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. // * If all dimensions are reduced and keep_dims is false, the output // * shape is [1]. // * // * Available since NNAPI feature level 3. REDUCE_ALL = 75, // * // * Reduces a tensor by computing the "logical or" of elements along given // * dimensions. // * // * If keep_dims is true, the reduced dimensions are // * retained with length 1. Otherwise, the rank of the tensor is reduced by // * 1 for each entry in dimensions. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_BOOL8} // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: An n-D tensor. // * * 1: A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The dimensions // * to reduce. Dimension values must be in the range [-n, n). // * * 2: An {@link ANEURALNETWORKS_BOOL} scalar, keep_dims. If true, // * retains reduced dimensions with length 1. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. // * If all dimensions are reduced and keep_dims is false, the output // * shape is [1]. // * // * Available since NNAPI feature level 3. REDUCE_ANY = 76, // * // * Reduces a tensor by computing the maximum of elements along given // * dimensions. // * // * If keep_dims is true, the reduced dimensions are // * retained with length 1. Otherwise, the rank of the tensor is reduced by // * 1 for each entry in dimensions. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: An n-D tensor. // * * 1: A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The dimensions // * to reduce. Dimension values must be in the range [-n, n). // * * 2: An {@link ANEURALNETWORKS_BOOL} scalar, keep_dims. If true, // * retains reduced dimensions with length 1. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. // * If all dimensions are reduced and keep_dims is false, the output // * shape is [1]. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 3. REDUCE_MAX = 77, // * // * Reduces a tensor by computing the minimum of elements along given // * dimensions. // * // * If keep_dims is true, the reduced dimensions are // * retained with length 1. Otherwise, the rank of the tensor is reduced by // * 1 for each entry in dimensions. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: An n-D tensor. // * * 1: A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The dimensions // * to reduce. Dimension values must be in the range [-n, n). // * * 2: An {@link ANEURALNETWORKS_BOOL} scalar, keep_dims. If true, // * retains reduced dimensions with length 1. 
// * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. // * If all dimensions are reduced and keep_dims is false, the output // * shape is [1]. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 3. REDUCE_MIN = 78, // * // * Reduces a tensor by multiplying elements along given dimensions. // * // * If keep_dims is true, the reduced dimensions are // * retained with length 1. Otherwise, the rank of the tensor is reduced by // * 1 for each entry in dimensions. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: An n-D tensor. // * * 1: A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The dimensions // * to reduce. Dimension values must be in the range [-n, n). // * * 2: An {@link ANEURALNETWORKS_BOOL} scalar, keep_dims. If true, // * retains reduced dimensions with length 1. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. // * If all dimensions are reduced and keep_dims is false, the output // * shape is [1]. // * // * Available since NNAPI feature level 3. REDUCE_PROD = 79, // * // * Reduces a tensor by summing elements along given dimensions. // * // * If keep_dims is true, the reduced dimensions are // * retained with length 1. Otherwise, the rank of the tensor is reduced by // * 1 for each entry in dimensions. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Supported tensor rank: up to 4 // * // * Inputs: // * * 0: An n-D tensor. // * * 1: A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The dimensions // * to reduce. Dimension values must be in the range [-n, n). // * * 2: An {@link ANEURALNETWORKS_BOOL} scalar, keep_dims. If true, // * retains reduced dimensions with length 1. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. // * If all dimensions are reduced and keep_dims is false, the output // * shape is [1]. // * // * Available since NNAPI feature level 3. REDUCE_SUM = 80, // * // * Select and scale the feature map of each region of interest to a unified // * output size by average pooling sampling points from bilinear interpolation. // * // * The region of interest is represented by its upper-left corner coordinate // * (x1,y1) and lower-right corner coordinate (x2,y2) in the original image. // * A spatial scaling factor is applied to map into feature map coordinates. // * A valid region of interest should satisfy x1 <= x2 and y1 <= y2. // * // * No rounding is applied in this operation. The sampling points are uniformly // * distributed in the pooling bin and their values are calculated by bilinear // * interpolation. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout. // * With the default data layout NHWC, the data is stored in the order of: // * [batch, height, width, channels].
Alternatively, the data layout could // * be NCHW, the data storage order of: [batch, channels, height, width]. // * // * Inputs: // * * 0: A 4-D tensor, specifying the feature map. // * * 1: A 2-D Tensor of shape [num_rois, 4], specifying the locations of // * the regions of interest, each line with format [x1, y1, x2, y2]. // * For input0 of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}, // * this tensor should be of {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, // * with zeroPoint of 0 and scale of 0.125. Zero num_rois is // * supported for this tensor. // * * 2: A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape // * [num_rois], specifying the batch index of each box. Boxes with // * the same batch index are grouped together. Zero num_rois is // * supported for this tensor. // * * 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output // * height of the output tensor. // * * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output // * width of the output tensor. // * * 5: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the ratio // * from the height of original image to the height of feature map. // * * 6: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the ratio // * from the width of original image to the width of feature map. // * * 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the number of // * sampling points in height dimension used to compute the output. // * Set to 0 for adaptive value of ceil(roi_height/out_height). // * * 8: An {@link ANEURALNETWORKS_INT32} scalar, specifying the number of // * sampling points in width dimension used to compute the output. // * Set to 0 for adaptive value of ceil(roi_width/out_width). // * * 9: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify // * NCHW data layout for input0 and output0. Set to false for NHWC. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. The output // * shape is [num_rois, out_height, out_width, depth]. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint can be different from the input0 scale and zeroPoint. // * // * Available since NNAPI feature level 3. ROI_ALIGN = 81, // * // * Select and scale the feature map of each region of interest to a unified // * output size by max-pooling. // * // * The region of interest is represented by its upper-left corner coordinate // * (x1,y1) and lower-right corner coordinate (x2,y2) in the original image. // * A spatial scaling factor is applied to map into feature map coordinates. // * A valid region of interest should satisfy x1 <= x2 and y1 <= y2. // * // * Rounding is applied in this operation to ensure integer boundary for // * regions of interest and pooling bins. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout. // * With the default data layout NHWC, the data is stored in the order of: // * [batch, height, width, channels]. Alternatively, the data layout could // * be NCHW, the data storage order of: [batch, channels, height, width]. // * // * Inputs: // * * 0: A 4-D tensor, specifying the feature map.
// * * 1: A 2-D Tensor of shape [num_rois, 4], specifying the locations of // * the regions of interest, each line with format [x1, y1, x2, y2]. // * For input0 of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * this tensor should be of {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}, // * with zeroPoint of 0 and scale of 0.125. // * * 2: A 1-D {@link ANEURALNETWORKS_TENSOR_INT32} tensor, of shape // * [num_rois], specifying the batch index of each box. Boxes with // * the same batch index are grouped together. // * * 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output // * height of the output tensor. // * * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output // * width of the output tensor. // * * 5: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the ratio // * from the height of original image to the height of feature map. // * * 6: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the ratio // * from the width of original image to the width of feature map. // * * 7: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify // * NCHW data layout for input0 and output0. Set to false for NHWC. // * // * Outputs: // * * 0: A tensor of the same {@link OperandCode} as input0. The output // * shape is [num_rois, out_height, out_width, depth]. // * For input0 of type {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 3. ROI_POOLING = 82, // * // * Computes reciprocal of square root of x element-wise. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} (since NNAPI feature level 7) // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 7) // * // * Supported tensor rank: from 1. // * // * Inputs: // * * 0: A tensor. // * // * Outputs: // * * 0: The output tensor of same shape as input0. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint can be different from inputs' scale and zeroPoint. // * // * Available since NNAPI feature level 3. RSQRT = 83, // * // * Using a tensor of booleans c and input tensors x and y, select values // * elementwise from both input tensors: // * // * O[i] = C[i] ? x[i] : y[i]. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1 // * // * Inputs: // * * 0: A tensor of type {@link ANEURALNETWORKS_TENSOR_BOOL8} acting as a // * mask that chooses, based on the value at each element, whether the // * corresponding element in the output should be taken from input1 (if // * true) or input2 (if false). // * * 1: An input tensor of the same shape as input0. // * * 2: An input tensor of the same shape and type as input1. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scales and zeroPoint can be different from input1 scale and zeroPoint.
// * // * Outputs: // * * 0: A tensor of the same type and shape as input1 and input2. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} tensor, // * the scale and zeroPoint can be different from inputs' scale and zeroPoint. // * // * Available since NNAPI feature level 3. SELECT = 84, // * // * Computes sin of x element-wise. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Supported tensor rank: from 1. // * // * Inputs: // * * 0: A tensor. // * // * Outputs: // * * 0: The output tensor of same shape as input0. // * // * Available since NNAPI feature level 3. SIN = 85, // * // * Extracts a slice of specified size from the input tensor starting at a // * specified location. // * // * The starting location is specified as a 1-D tensor containing offsets // * for each dimension. The size is specified as a 1-D tensor containing // * either the size of the slice along the corresponding dimension, or -1. In // * the latter case, all the remaining elements in the dimension are included // * in the slice. // * // * The sum of the begin offset and the size of the slice must not exceed the // * size of the corresponding dimension. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1 // * // * Inputs: // * * 0: An n-D tensor to take slice from, may be zero-sized. // * * 1: A 1-D tensor of type {@link ANEURALNETWORKS_TENSOR_INT32} specifying // * the beginning indices of the slice in each dimension. // * * 2: A 1-D tensor of type {@link ANEURALNETWORKS_TENSOR_INT32} specifying // * the size of the slice in each dimension. // * // * Outputs: // * * 0: An n-D tensor of the same type as the input containing the slice. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * its scale and zeroPoint have to be the same as the input0 scale and zeroPoint. // * // * Available since NNAPI feature level 3. SLICE = 86, // * // * Splits a tensor along a given axis into num_splits subtensors. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1 // * // * Inputs: // * * 0: An n-D tensor to split. // * * 1: An {@link ANEURALNETWORKS_INT32} scalar specifying the axis along // * which to split. // * * 2: An {@link ANEURALNETWORKS_INT32} scalar indicating the number of // * splits along given axis. Must evenly divide axis size. // * // * Outputs: // * * 0 ~ (num_splits - 1): Resulting subtensors. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 3. SPLIT = 87, // * // * Computes square root of x element-wise. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Supported tensor rank: from 1.
// * // * Inputs: // * * 0: A tensor. // * // * Outputs: // * * 0: The output tensor of same shape as input0. // * // * Available since NNAPI feature level 3. SQRT = 88, // * // * Constructs a tensor by tiling a given tensor. // * // * This operation creates a new tensor by replicating `input` `multiples` // * times. The output tensor's i-th dimension has `input.dims(i) * multiples[i]` // * elements, and the values of `input` are replicated `multiples[i]` times // * along the i-th dimension. // * For example, tiling `[a b c d]` by `[2]` produces `[a b c d a b c d]`. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1 // * // * Inputs: // * * 0: input, an n-D tensor specifying the input. // * * 1: multiples, a 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. // * The length of multiples must be n. // * // * Outputs: // * * 0: A tiled tensor of the same {@link OperandCode} and rank as `input`. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 3. TILE = 89, // * // * Finds values and indices of the k largest entries for the last dimension. // * // * Resulting values in each dimension are sorted in descending order. If // * two values are equal, the one with larger index appears first. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_INT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: from 1 // * // * Inputs: // * * 0: input, an n-D tensor specifying the input. // * * 1: k, an {@link ANEURALNETWORKS_INT32} scalar, specifying the number of // * top elements to look for along the last dimension. // * // * Outputs: // * * 0: An n-D tensor of the same type as the input, containing the k // * largest elements along each last dimensional slice. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * * 1: An n-D tensor of type {@link ANEURALNETWORKS_TENSOR_INT32} // * containing the indices of values within the last dimension of input. // * // * Available since NNAPI feature level 3. TOPK_V2 = 90, // * // * Performs the transpose of the 2-D convolution operation. // * // * This operation is sometimes called "deconvolution" after Deconvolutional // * Networks, but is actually the transpose (gradient) of // * {@link ANEURALNETWORKS_CONV_2D} rather than an actual deconvolution. // * // * The output dimensions are functions of the filter dimensions, stride, and // * padding. // * // * Supported tensor {@link OperandCode} configurations: // * * 16 bit floating point: // * * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} for input, filter, output, and bias. // * // * * 32 bit floating point: // * * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} for input, filter, output, and bias.
// * // * * Quantized: // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} for input, filter, and output. // * * * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (with scale set to // * * * input.scale * filter.scale). // * // * * Quantized with symmetric per channel quantization for the filter: // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} for input, and output. // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter. // * * * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (scale set to 0.0, // * * * each value scaling is separate and equal to input.scale * filter.scales[channel]). // * // * Available since NNAPI feature level 4: // * * Quantized signed (since NNAPI feature level 4): // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} for input, filter, and output. // * * * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (with scale set to // * * * input.scale * filter.scale). // * // * * Quantized signed with filter symmetric per channel quantization // * (since NNAPI feature level 4): // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} for input, and output. // * * * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} for filter. // * * * {@link ANEURALNETWORKS_TENSOR_INT32} for bias (scale set to 0.0, // * * * each value scaling is separate and equal to input.scale * filter.scales[channel]). // * // * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout. // * With the default data layout NHWC, the data is stored in the order of: // * [batch, height, width, channels]. Alternatively, the data layout could // * be NCHW, the data storage order of: [batch, channels, height, width]. // * // * Both explicit padding and implicit padding are supported. // * // * Inputs (explicit padding): // * * 0: A 4-D tensor, of shape [batches, height, width, depth_in], // * specifying the input. // * Since API level 29, zero batches is supported for this tensor. // * * 1: A 4-D tensor, of shape // * [depth_out, filter_height, filter_width, depth_in], specifying the // * filter. For tensor of type // * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} the channel // * dimension (ANeuralNetworksSymmPerChannelQuantParams::channelDim) must be set to 0. // * * 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input // * tensor of type {@link ANEURALNETWORKS_TENSOR_FLOAT32} or // * {@link ANEURALNETWORKS_TENSOR_FLOAT16}, the bias must be of the // * same type. // * For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, // * the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, // * with zeroPoint of 0 and bias_scale == input_scale * filter_scale. // * For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}, // * the bias must be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 // * and bias_scale of 0. The actual scale of each value 'i' is equal to // * bias_scale[i] = input_scale * filter_scale[i]. // * * 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the left, in the ‘width’ dimension. // * * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the right, in the ‘width’ dimension. // * * 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the top, in the ‘height’ dimension. // * * 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on // * the bottom, in the ‘height’ dimension. 
// * * 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘width’ dimension. // * * 8: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘height’ dimension. // * * 9: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the // * {@link FuseCode} values. Specifies the activation to // * invoke on the result. // * * 10: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify // * NCHW data layout for input0 and output0. Set to false for NHWC. // * // * Inputs (implicit padding): // * * 0: A 4-D tensor, of shape [batches, height, width, depth_in], // * specifying the input. // * Since API level 29, zero batches is supported for this tensor. // * * 1: A 4-D tensor, of shape // * [depth_out, filter_height, filter_width, depth_in], specifying the // * filter. For tensor of type // * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} the channel // * dimension (ANeuralNetworksSymmPerChannelQuantParams::channelDim) must be set to 0. // * * 2: A 1-D tensor, of shape [depth_out], specifying the bias. For input // * tensor of type {@link ANEURALNETWORKS_TENSOR_FLOAT32} or // * {@link ANEURALNETWORKS_TENSOR_FLOAT16}, the bias should be of the // * same type. // * For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}, // * the bias should be of {@link ANEURALNETWORKS_TENSOR_INT32}, // * with zeroPoint of 0 and bias_scale == input_scale * filter_scale. // * For filter tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}, // * the bias must be of {@link ANEURALNETWORKS_TENSOR_INT32}, with zeroPoint of 0 // * and bias_scale of 0. The actual scale of each value 'i' is equal to // * bias_scale[i] = input_scale * filter_scale[i]. // * * 3: An {@link ANEURALNETWORKS_TENSOR_INT32} tensor, specifying the output // * tensor shape. // * * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the implicit // * padding scheme, has to be one of the // * {@link PaddingCode} values. // * * 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘width’ dimension. // * * 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when // * walking through input in the ‘height’ dimension. // * * 7: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the // * {@link FuseCode} values. Specifies the activation to // * invoke on the result. // * * 8: An {@link ANEURALNETWORKS_BOOL} scalar, set to true to specify // * NCHW data layout for input0 and output0. Set to false for NHWC. // * // * Outputs: // * * 0: The output 4-D tensor, of shape // * [batches, out_height, out_width, depth_out]. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint can be different from inputs' scale and zeroPoint. // * // * Available since NNAPI feature level 3. TRANSPOSE_CONV_2D = 91, // * // * A recurrent neural network specified by an LSTM cell. // * // * Performs (fully) dynamic unrolling of input. 
// * // * This Op unrolls the input along the time dimension, and implements the // * following operation for each element in the sequence // * s = 1...sequence_length: // * outputs[s] = projection(state = activation(LSTMOp(inputs[s]))) // * // * Where LSTMOp is the LSTM op as in {@link ANEURALNETWORKS_LSTM}, // * the "projection" is an optional projection layer from state and output // * and the “activation” is the function passed as the // * “fused_activation_function” argument (if not “NONE”). // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Supported tensor rank: 3, either time-major or batch-major. // * // * All input and output tensors must be of the same type. // * // * Inputs: // * * 0: The input (\f$x_t\f$). // * A 3-D tensor of shape: // * If time-major: [max_time, batch_size, input_size] // * If batch-major: [batch_size, max_time, input_size] // * where “max_time” is the number of timesteps (sequence length), // * “batch_size” corresponds to the batching dimension, and // * “input_size” is the size of the input. // * * 1: The input-to-input weights (\f$W_{xi}\f$). Optional. // * A 2-D tensor of shape [num_units, input_size], where “num_units” // * corresponds to the number of cell units. // * * 2: The input-to-forget weights (\f$W_{xf}\f$). // * A 2-D tensor of shape [num_units, input_size]. // * * 3: The input-to-cell weights (\f$W_{xc}\f$). // * A 2-D tensor of shape [num_units, input_size]. // * * 4: The input-to-output weights (\f$W_{xo}\f$). // * A 2-D tensor of shape [num_units, input_size]. // * * 5: The recurrent-to-input weights (\f$W_{hi}\f$). Optional. // * A 2-D tensor of shape [num_units, output_size], where “output_size” // * corresponds to either the number of cell units (i.e., “num_units”), // * or the second dimension of the “projection_weights”, if defined. // * * 6: The recurrent-to-forget weights (\f$W_{hf}\f$). // * A 2-D tensor of shape [num_units, output_size]. // * * 7: The recurrent-to-cell weights (\f$W_{hc}\f$). // * A 2-D tensor of shape [num_units, output_size]. // * * 8: The recurrent-to-output weights (\f$W_{ho}\f$). // * A 2-D tensor of shape [num_units, output_size]. // * * 9: The cell-to-input weights (\f$W_{ci}\f$). Optional. // * A 1-D tensor of shape [num_units]. // * * 10:The cell-to-forget weights (\f$W_{cf}\f$). Optional. // * A 1-D tensor of shape [num_units]. // * * 11:The cell-to-output weights (\f$W_{co}\f$). Optional. // * A 1-D tensor of shape [num_units]. // * * 12:The input gate bias (\f$b_i\f$). Optional. // * A 1-D tensor of shape [num_units]. // * * 13:The forget gate bias (\f$b_f\f$). // * A 1-D tensor of shape [num_units]. // * * 14:The cell bias (\f$b_c\f$). // * A 1-D tensor of shape [num_units]. // * * 15:The output gate bias (\f$b_o\f$). // * A 1-D tensor of shape [num_units]. // * * 16:The projection weights (\f$W_{proj}\f$). Optional. // * A 2-D tensor of shape [output_size, num_units]. // * * 17:The projection bias (\f$b_{proj}\f$). Optional. // * A 1-D tensor of shape [output_size]. // * * 18:The output state (in) (\f$h_{t-1}\f$). // * A 2-D tensor of shape [batch_size, output_size]. // * * 19:The cell state (in) (\f$C_{t-1}\f$). // * A 2-D tensor of shape [batch_size, num_units]. // * * 20:The activation function (\f$g\f$). // * A value indicating the activation function: // * <ul> // * <li>0: None; // * <li>1: Relu; // * <li>3: Relu6; // * <li>4: Tanh; // * <li>6: Sigmoid. 
// * </ul> // * * 21:The clipping threshold (\f$t_{cell}\f$) for the cell state, such // * that values are bound within [-cell_clip, cell_clip]. If set to 0.0 // * then clipping is disabled. // * * 22:The clipping threshold (\f$t_{proj}\f$) for the output from the // * projection layer, such that values are bound within // * [-proj_clip, proj_clip]. If set to 0.0 then clipping is disabled. // * * 23:Time-major if true, batch-major if false. // * * 24:The input layer normalization weights. Optional. // * A 1-D tensor of shape [num_units]. Used to rescale normalized inputs // * to activation at input gate. // * * 25:The forget layer normalization weights. Optional. // * A 1-D tensor of shape [num_units]. Used to rescale normalized inputs // * to activation at forget gate. // * * 26:The cell layer normalization weights. Optional. // * A 1-D tensor of shape [num_units]. Used to rescale normalized inputs // * to activation at cell gate. // * * 27:The output layer normalization weights. Optional. // * A 1-D tensor of shape [num_units]. Used to rescale normalized inputs // * to activation at output gate. // * // * Outputs: // * * 0: The output (\f$o_t\f$). // * A 3-D tensor of shape: // * If time-major: [max_time, batch_size, output_size] // * If batch-major: [batch_size, max_time, output_size] // * * 1: A tensor of shape [batch_size, output_size] containing a hidden // * state from the last time step in the sequence. This output is // * optional and can be omitted. If this output is present then // * output #2 must be present as well. // * Available since NNAPI feature level 4. // * * 2: A tensor of shape [batch_size, cell_size] containing a cell state // * from the last time step in the sequence. This output is optional // * and can be omitted. // * Available since NNAPI feature level 4. // * // * Available since NNAPI feature level 3. // * // * Important: As of NNAPI feature level 3, there is no way to get the output state tensors out // * and NNAPI does not maintain internal states. This operator does not support the usage pattern // * in which multiple cells are chained and state tensors are propagated. UNIDIRECTIONAL_SEQUENCE_LSTM = 92, // * // * A recurrent neural network layer that applies a basic RNN cell to a // * sequence of inputs. // * // * This layer unrolls the input along the sequence dimension, and implements // * the following operation // * for each element in the sequence s = 1...sequence_length: // * outputs[s] = state = activation(inputs[s] * input_weights’ + state * // * recurrent_weights’ + bias) // * // * Where: // * * “input_weights” is a weight matrix that multiplies the inputs; // * * “recurrent_weights” is a weight matrix that multiplies the current // * “state” which itself is the output from the previous time step // * computation; // * * “bias” is a bias vector (added to each output vector in the batch); // * * “activation” is the function passed as the “fused_activation_function” // * argument (if not “NONE”). // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * The input tensors must all be the same type. // * // * Inputs: // * * 0: input. // * A 3-D tensor. The shape is defined by the input 6 (timeMajor). If // * it is set to 1, then the input has a shape [maxTime, batchSize, // * inputSize], otherwise the input has a shape [batchSize, maxTime, // * inputSize]. // * * 1: weights. // * A 2-D tensor of shape [numUnits, inputSize]. // * * 2: recurrent_weights. 
// * A 2-D tensor of shape [numUnits, numUnits]. // * * 3: bias. // * A 1-D tensor of shape [numUnits]. // * * 4: hidden state // * A 2-D tensor of shape [batchSize, numUnits]. Specifies a hidden // * state input for the first time step of the computation. // * * 5: fusedActivationFunction. // * A {@link FuseCode} value indicating the activation function. If // * “NONE” is specified then it results in a linear activation. // * * 6: timeMajor // * An {@link ANEURALNETWORKS_INT32} scalar specifying the shape format // * of input and output tensors. Must be set to either 0 or 1. // * Outputs: // * * 0: output. // * A 3-D tensor. The shape is defined by the input 6 (timeMajor). If // * it is set to 1, then the output has a shape [maxTime, batchSize, // * numUnits], otherwise the output has a shape [batchSize, maxTime, // * numUnits]. // * * 1: A tensor of shape [batchSize, numUnits] containing hidden state // * from the last time step in the sequence. This output is optional // * and can be omitted. // * Available since NNAPI feature level 4. // * // * Available since NNAPI feature level 3. // * // * Important: As of NNAPI feature level 3, there is no way to get the output state tensors out // * and NNAPI does not maintain internal states. This operator does not support the usage pattern // * in which multiple cells are chained and state tensors are propagated. UNIDIRECTIONAL_SEQUENCE_RNN = 93, // * // * Resizes images to a given size using nearest neighbor interpolation. // * // * Resized images will be distorted if their output aspect ratio is not the // * same as the input aspect ratio. The corner pixels of output may not be the // * same as corner pixels of input. // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} (since NNAPI feature level 4) // * // * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout. // * With the default data layout NHWC, the data is stored in the order of: // * [batch, height, width, channels]. Alternatively, the data layout could // * be NCHW, the data storage order of: [batch, channels, height, width]. // * // * Both resizing by shape and resizing by scale are supported. // * // * Inputs (resizing by shape): // * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying // * the input. Zero batches is supported for this tensor. // * * 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output // * width of the output tensor. // * * 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output // * height of the output tensor. // * * 3: An {@link ANEURALNETWORKS_BOOL} scalar, default to false. // * Set to true to specify NCHW data layout for input0 and output0. // * * 4: Align corners. An optional {@link ANEURALNETWORKS_BOOL} // * scalar, default to false. If True, the centers of the 4 corner // * pixels of the input and output tensors are aligned, preserving the // * values at the corner pixels. // * Available since NNAPI feature level 4. // * * 5: Half pixel centers. An optional {@link ANEURALNETWORKS_BOOL} // * scalar, default to false. If True, the pixel centers are assumed to // * be at (0.5, 0.5). This is the default behavior of image.resize in // * TF 2.0. If this parameter is True, then align_corners parameter // * must be False. // * Available since NNAPI feature level 4.
// * // * Inputs (resizing by scale): // * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying // * the input. Zero batches is supported for this tensor. // * * 1: A scalar, specifying width_scale, the scaling factor of the width // * dimension from the input tensor to the output tensor. The output // * width is calculated as new_width = floor(width * width_scale). // * The scalar must be of {@link ANEURALNETWORKS_FLOAT16} if input0 is // * of {@link ANEURALNETWORKS_TENSOR_FLOAT16} and of // * {@link ANEURALNETWORKS_FLOAT32} otherwise. // * * 2: A scalar, specifying height_scale, the scaling factor of the height // * dimension from the input tensor to the output tensor. The output // * height is calculated as new_height = floor(height * height_scale). // * The scalar must be of {@link ANEURALNETWORKS_FLOAT16} if input0 is // * of {@link ANEURALNETWORKS_TENSOR_FLOAT16} and of // * {@link ANEURALNETWORKS_FLOAT32} otherwise. // * * 3: An {@link ANEURALNETWORKS_BOOL} scalar, default to false. // * Set to true to specify NCHW data layout for input0 and output0. // * * 4: Align corners. An optional {@link ANEURALNETWORKS_BOOL} // * scalar, default to false. If True, the centers of the 4 corner // * pixels of the input and output tensors are aligned, preserving the // * values at the corner pixels. // * Available since NNAPI feature level 4. // * * 5: Half pixel centers. An optional {@link ANEURALNETWORKS_BOOL} // * scalar, default to false. If True, the pixel centers are assumed to // * be at (0.5, 0.5). This is the default behavior of image.resize in // * TF 2.0. If this parameter is True, then align_corners parameter // * must be False. // * Available since NNAPI feature level 4. // * // * Outputs: // * * 0: The output 4-D tensor, of shape // * [batches, new_height, new_width, depth]. // * For a {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and // * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensor, // * the scale and zeroPoint must be the same as input0. // * // * Available since NNAPI feature level 3. RESIZE_NEAREST_NEIGHBOR = 94, // * // * Quantized version of {@link ANEURALNETWORKS_LSTM}. // * // * The input and the output use asymmetric quantized types, while the rest // * use symmetric ones. // * // * Inputs: // * * 0: The input to the LSTM cell. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} // * Shape: [batchSize, inputSize] // * * 1: The input-to-input weights. Optional. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM} // * Shape: [numUnits, inputSize] // * * 2: The input-to-forget weights. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM} // * Shape: [numUnits, inputSize] // * * 3: The input-to-cell weights. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM} // * Shape: [numUnits, inputSize] // * * 4: The input-to-output weights. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM} // * Shape: [numUnits, inputSize] // * * 5: The recurrent-to-input weights. Optional. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM} // * Shape: [numUnits, outputSize] // * * 6: The recurrent-to-forget weights. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM} // * Shape: [numUnits, outputSize] // * * 7: The recurrent-to-cell weights. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM} // * Shape: [numUnits, outputSize] // * * 8: The recurrent-to-output weights. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM} // * Shape: [numUnits, outputSize] // * * 9: The cell-to-input weights (for peephole). Optional. 
// * Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM} // * Shape: [numUnits] // * * 10: The cell-to-forget weights (for peephole). Optional. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM} // * Shape: [numUnits] // * * 11: The cell-to-output weights (for peephole). Optional. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM} // * Shape: [numUnits] // * * 12: The input gate bias. Quantized with scale being the // * product of input and weights scales and zeroPoint equal to 0. // * Optional. // * Type: {@link ANEURALNETWORKS_TENSOR_INT32} // * Shape: [numUnits] // * * 13: The forget gate bias. Quantized with scale being the // * product of input and weights scales and zeroPoint equal to 0. // * Type: {@link ANEURALNETWORKS_TENSOR_INT32} // * Shape: [numUnits] // * * 14: The cell bias. Quantized with scale being the // * product of input and weights scales and zeroPoint equal to 0. // * Type: {@link ANEURALNETWORKS_TENSOR_INT32} // * Shape: [numUnits] // * * 15: The output gate bias. Quantized with scale being the // * product of input and weights scales and zeroPoint equal to 0. // * Type: {@link ANEURALNETWORKS_TENSOR_INT32} // * Shape: [numUnits] // * * 16: The projection weights. Optional. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM} // * Shape: [outputSize, numUnits] // * * 17: The projection bias. Quantized with scale being the // * product of input and weights scales and zeroPoint equal to 0. // * Optional. // * Type: {@link ANEURALNETWORKS_TENSOR_INT32} // * Shape: [outputSize] // * * 18: The output from the previous time step. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} // * Shape: [batchSize, outputSize] // * * 19: The cell state from the previous time step. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM} // * Shape: [batchSize, numUnits] // * * 20: The input layer normalization weights. Used to rescale // * normalized inputs to activation at input gate. Optional. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM} // * Shape: [numUnits] // * * 21: The forget layer normalization weights. Used to // * rescale normalized inputs to activation at forget gate. Optional. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM} // * Shape: [numUnits] // * * 22: The cell layer normalization weights. Used to rescale // * normalized inputs to activation at cell gate. Optional. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM} // * Shape: [numUnits] // * * 23: The output layer normalization weights. Used to // * rescale normalized inputs to activation at output gate. Optional. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM} // * Shape: [numUnits] // * * 24: The cell clip. If provided the cell state is clipped // * by this value prior to the cell output activation. Optional. // * Type: {@link ANEURALNETWORKS_FLOAT32}. // * * 25: The projection clip. If provided and projection is enabled, // * this is used for clipping the projected values. Optional. // * Type: {@link ANEURALNETWORKS_FLOAT32}. // * * 26: The scale of the intermediate result of matmul, // * i.e. input to layer normalization, at input gate. // * Type: {@link ANEURALNETWORKS_FLOAT32}. // * * 27: The scale of the intermediate result of matmul, // * i.e. input to layer normalization, at forget gate. // * Type: {@link ANEURALNETWORKS_FLOAT32}. // * * 28: The scale of the intermediate result of matmul, // * i.e. input to layer normalization, at cell gate. // * Type: {@link ANEURALNETWORKS_FLOAT32}. // * * 29: The scale of the intermediate result of matmul, // * i.e. 
input to layer normalization, at output gate. // * Type: {@link ANEURALNETWORKS_FLOAT32}. // * * 30: The zero point of the hidden state, i.e. input to // * projection. // * Type: {@link ANEURALNETWORKS_INT32}. // * * 31: The scale of the hidden state, i.e. input to // * projection. // * Type: {@link ANEURALNETWORKS_FLOAT32}. // * // * Outputs: // * * 0: The output state (out). // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} // * Shape: [batchSize, outputSize] // * * 1: The cell state (out). // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM} // * Shape: [batchSize, numUnits] // * * 2: The output. This is effectively the same as the current // * "output state (out)" value. // * Type: {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} // * Shape: [batchSize, outputSize] // * // * Available since NNAPI feature level 4. QUANTIZED_LSTM = 95, // * // * Executes one of the two referenced models as determined by a boolean // * value. // * // * The inputs and outputs of the two referenced models must agree with the // * signature of this operation. That is, if the operation has (3 + n) inputs // * and m outputs, both models must have n inputs and m outputs with the same // * types, ranks (if specified), dimensions (if specified), scales, // * zeroPoints, and other operand parameters as the corresponding operation // * inputs and outputs. // * // * Inputs: // * * 0: A value of type {@link ANEURALNETWORKS_TENSOR_BOOL8} and shape [1] // * that determines which of the two referenced models to execute. // * The operand must have fully specified dimensions. // * * 1: A {@link ANEURALNETWORKS_MODEL} reference to the model to be // * executed if the condition is true. // * * 2: A {@link ANEURALNETWORKS_MODEL} reference to the model to be // * executed if the condition is false. // * * 3 ~ (n + 2): Inputs to be passed to the model selected for execution. // * // * Outputs: // * * 0 ~ (m - 1): Outputs produced by the selected model. // * // * Available since NNAPI feature level 4. IF = 96, // * // * Executes the body model until the condition model outputs false. // * // * The inputs to this operation are the condition model, the body model, // * and operand values for the first iteration of the loop. The values are // * implicitly split into three groups of input-output, state-only, and // * input-only values, as described below. // * // * The outputs of this operation are the final values of input-output // * operands. // * // * Both the condition and body model receive (m + k + n) inputs. // * * The first m (m >= 1) inputs are input-output operands. For the first // * iteration, these are initialized from the corresponding inputs of the // * WHILE operation. In subsequent iterations, their values come from the // * corresponding outputs of the body model produced during the previous // * iteration. // * * The next k (k >= 0) inputs are state-only operands. They are similar to // * the input-output operands, except that their values are no longer // * available after the loop terminates. // * * The last n (n >= 0) inputs are input-only operands. Their values come // * from the corresponding inputs of the WHILE operation. // * // * The body model produces (m + k) outputs. // * * The first m outputs are input-output operands. They become the outputs // * of the WHILE operation when a termination condition is reached. // * * The last k outputs are state-only operands. Their values are no longer // * available after the loop terminates. 
// * // * The numbers m, k, and n are inferred by the runtime as follows: // * m = (WHILE operation output count) // * k = (body model output count) - m // * n = (body model input count) - m - k // * // * The pseudo-code below illustrates the flow of a WHILE operation with // * inputs condition, body, initial_input_output, initial_state, input_only // * (m = 1, k = 1, n = 1): // * // * input_output = initial_input_output // * state = initial_state // * while condition(input_output, state, input_only): // * input_output, state = body(input_output, state, input_only) // * return input_output // * // * To prevent infinite loops, there is an implicit execution timeout // * associated with each loop ("loop timeout duration"). See {@link // * ANeuralNetworksExecution_setLoopTimeout}. // * // * Inputs: // * * 0: A {@link ANEURALNETWORKS_MODEL} reference to the condition // * model. The model must have (m + k + n) inputs with // * the same types, ranks (if specified), dimensions (if specified), // * scales, zeroPoints, and other operand parameters as the // * corresponding inputs of the WHILE operation and exactly one output // * of {@link ANEURALNETWORKS_TENSOR_BOOL8} and shape [1]. // * The output operand must have fully specified dimensions. // * * 1: A {@link ANEURALNETWORKS_MODEL} reference to the body model. // * The model must have (m + k + n) inputs and (m + k) outputs with // * the same types, ranks (if specified), dimensions (if specified), // * scales, zeroPoints, and other operand parameters as the // * corresponding inputs and outputs of the WHILE operation. // * * (m inputs): Initial values for input-output operands. // * * (k inputs): Initial values for state-only operands. // * * (n inputs): Values for input-only operands. // * // * Outputs: // * * 0 ~ (m - 1): Outputs produced by the loop. // * // * Available since NNAPI feature level 4. WHILE = 97, // * // * Computes exponential linear activation on the input tensor element-wise. // * // * The output is calculated using the following formula: // * // * ELU(x) = max(0, x) + min(0, alpha * (exp(x) - 1)) // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * // * Supported tensor rank: from 1. // * // * Inputs: // * * 0: A tensor, specifying the input. May be zero-sized. // * * 1: A scalar, specifying the alpha parameter. // * For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT16}, // * the alpha value must be of {@link ANEURALNETWORKS_FLOAT16}. // * For input tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, // * the alpha value must be of {@link ANEURALNETWORKS_FLOAT32}. // * // * Outputs: // * * 0: The output tensor of same shape and type as input0. // * // * Available since NNAPI feature level 4. ELU = 98, // * // * Computes hard-swish activation on the input tensor element-wise. // * // * Hard swish activation is introduced in // * https://arxiv.org/pdf/1905.02244.pdf // * // * The output is calculated using the following formula: // * // * h-swish(x) = x * max(0, min(6, (x + 3))) / 6 // * // * Supported tensor {@link OperandCode}: // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16} // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} // * // * Supported tensor rank: from 1. // * // * Inputs: // * * 0: A tensor, specifying the input. May be zero-sized. // * // * Outputs: // * * 0: The output tensor of same shape and type as input0. 
    // *
    // * Scale and zero point of this tensor may be different from the input tensor's parameters.
    // *
    // * Available since NNAPI feature level 4.
    HARD_SWISH = 99,

    // *
    // * Creates a tensor filled with a scalar value.
    // *
    // * Supported output tensor {@link OperandCode}:
    // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16}
    // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
    // * * {@link ANEURALNETWORKS_TENSOR_INT32}
    // *
    // * Supported tensor rank: from 1.
    // *
    // * Inputs:
    // * * 0: A 1-D tensor, specifying the desired output tensor shape.
    // * * 1: A scalar, specifying the value to fill the output tensors with.
    // *      For an output tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT16}, the scalar must be of {@link ANEURALNETWORKS_FLOAT16}.
    // *      For an output tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, the scalar must be of {@link ANEURALNETWORKS_FLOAT32}.
    // *      For an output tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the scalar must be of {@link ANEURALNETWORKS_INT32}.
    // *
    // * Outputs:
    // * * 0: The output tensor.
    // *
    // * Available since NNAPI feature level 4.
    FILL = 100,

    // *
    // * Returns the rank of a tensor.
    // *
    // * The rank of a tensor is the number of dimensions in it. Also known as "order", "degree", or "ndims".
    // *
    // * Supported tensor {@link OperandCode}:
    // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16}
    // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
    // * * {@link ANEURALNETWORKS_TENSOR_INT32}
    // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
    // * * {@link ANEURALNETWORKS_TENSOR_QUANT16_SYMM}
    // * * {@link ANEURALNETWORKS_TENSOR_BOOL8}
    // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}
    // * * {@link ANEURALNETWORKS_TENSOR_QUANT16_ASYMM}
    // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM}
    // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}
    // *
    // * Supported tensor rank: from 1.
    // *
    // * Inputs:
    // * * 0: The input tensor.
    // *
    // * Outputs:
    // * * 0: A scalar of {@link ANEURALNETWORKS_INT32}, specifying the rank of the input tensor.
    // *
    // * Available since NNAPI feature level 4.
    RANK = 101,

    // *
    // * Performs multiplication of two tensors in batches.
    // *
    // * Multiplies all slices of two input tensors and arranges the individual results
    // * in a single output tensor of the same batch size. Each pair of slices in the
    // * same batch has identical {@link OperandCode}. Each slice can optionally be
    // * adjointed (transposed and conjugated) before multiplication.
    // *
    // * The two input tensors and the output tensor must be 2-D or higher and have the same batch size.
    // *
    // * Supported tensor {@link OperandCode}:
    // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16}
    // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
    // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}
    // * * {@link ANEURALNETWORKS_TENSOR_INT32}
    // *
    // * Supported tensor rank: at least 2 and up to 4.
    // *
    // * Inputs:
    // * * 0: A tensor with 2-D or higher shape [..., r_x, c_x].
    // * * 1: A tensor with 2-D or higher shape [..., r_y, c_y]. It has the same {@link OperandCode} and batch size as input0.
    // * * 2: An optional {@link ANEURALNETWORKS_BOOL} scalar adj_x, defaulting to false. Set to true to adjoint the slices of input0.
    // * * 3: An optional {@link ANEURALNETWORKS_BOOL} scalar adj_y, defaulting to false. Set to true to adjoint the slices of input1.
    // *
    // * Outputs:
    // * * 0: A tensor with 2-D or higher shape [..., r_o, c_o], where
    // *      r_o = c_x if adj_x else r_x
    // *      c_o = r_y if adj_y else c_y
    // *
    // * Available since NNAPI feature level 6.
    BATCH_MATMUL = 102,

    // *
    // * Packs N input tensors (N >= 1) of rank R into one output tensor of rank R+1.
    // * The tensors are packed along a given axis.
    // *
    // * The input tensors must have identical {@link OperandCode} and dimensions.
    // *
    // * For example, suppose there are N input tensors of shape (A, B, C).
    // * If axis is 0, the output tensor will have shape (N, A, B, C).
    // * If axis is 1, the output tensor will have shape (A, N, B, C).
    // *
    // * All dimensions through the axis dimension determine the output tile count;
    // * the remaining dimensions determine the tile shape.
    // *
    // * Returning to the example of N input tensors of shape (A, B, C):
    // * If axis is 0, there are N tiles in the output, each of shape (A, B, C).
    // * If axis is 1, there are A*N tiles in the output, each of shape (B, C).
    // *
    // * The coordinates of a tile within the output tensor are (t[0],...,t[axis]).
    // * The coordinates of a tile within an input tensor are (t[0],...,t[axis-1]).
    // * (If axis is 0, an input tensor consists of a single tile.)
    // * If we index input tensors starting with 0 (rather than by operand number), then
    // * output_tile[t[0],...,t[axis]] = input_tile[t[axis]][t[0],...,t[axis-1]].
    // * That is, all output tile coordinates except for the axis coordinate select the
    // * corresponding location within some input tensor; the axis coordinate selects the input tensor.
    // *
    // * Supported tensor {@link OperandCode}:
    // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16}
    // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
    // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
    // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}
    // * * {@link ANEURALNETWORKS_TENSOR_INT32}
    // *
    // * Supported input tensor rank: from 1.
    // *
    // * Inputs:
    // * * 0: A scalar of type {@link ANEURALNETWORKS_INT32}, specifying the axis along which to pack. The valid range is [0, R+1).
    // * * 1 ~ N: Input tensors to be packed together.
    // *          For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensors,
    // *          the scales and zeroPoint must be the same for all input tensors, and will be the same for the output tensor.
    // *
    // * Outputs:
    // * * 0: The packed tensor.
    // *
    // * Available since NNAPI feature level 6.
    PACK = 103,

    // *
    // * Pads a tensor with mirrored values.
    // *
    // * This operator specifies one of two padding modes: REFLECT or SYMMETRIC.
    // * In REFLECT mode, the mirroring excludes the border element on the padding side.
    // * In SYMMETRIC mode, the mirroring includes the border element on the padding side.
    // *
    // * For example, if the input is the 1-D tensor `[1, 2, 3]` and the padding
    // * is `[0, 2]` (i.e., pad no elements before the first (and only) dimension,
    // * and two elements after the first (and only) dimension), then:
    // * - REFLECT mode produces the output `[1, 2, 3, 2, 1]`
    // * - SYMMETRIC mode produces the output `[1, 2, 3, 3, 2]`
    // *
    // * Supported tensor {@link OperandCode}:
    // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16}
    // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
    // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
    // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}
    // * * {@link ANEURALNETWORKS_TENSOR_INT32}
    // *
    // * Supported tensor rank: from 1.
    // *
    // * Inputs:
    // * * 0: An n-D tensor, specifying the tensor to be padded.
    // * * 1: A 2-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the paddings for each spatial
    // *      dimension of the input tensor. The shape of the tensor must be {rank(input0), 2}.
    // *      padding[i, 0] specifies the number of elements to be padded in front of dimension i.
    // *      padding[i, 1] specifies the number of elements to be padded after the end of dimension i.
    // *      Each padding value must be nonnegative.
    // *      In REFLECT mode, each padding value must be less than the corresponding dimension.
    // *      In SYMMETRIC mode, each padding value must be less than or equal to the corresponding dimension.
    // * * 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the mode. Options are 0:REFLECT and 1:SYMMETRIC.
    // *
    // * Outputs:
    // * * 0: A tensor of the same {@link OperandCode} as input0. The output tensor has the same
    // *      rank as input0, and each dimension of the output tensor has the same size as the
    // *      corresponding dimension of the input tensor plus the size of the padding:
    // *      output0.dimension[i] = padding[i, 0] + input0.dimension[i] + padding[i, 1]
    // *      For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensors,
    // *      the scale and zeroPoint must be the same as input0.
    // *
    // * Available since NNAPI feature level 7.
    MIRROR_PAD = 104,

    // *
    // * Reverses a specified dimension of a tensor.
    // *
    // * Supported tensor {@link OperandCode}:
    // * * {@link ANEURALNETWORKS_TENSOR_FLOAT16}
    // * * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
    // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
    // * * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED}
    // * * {@link ANEURALNETWORKS_TENSOR_INT32}
    // *
    // * Supported tensor rank: up to 8.
    // *
    // * Inputs:
    // * * 0: Input tensor of rank n.
    // * * 1: Axis tensor of type {@link ANEURALNETWORKS_TENSOR_INT32} and shape [1], specifying which
    // *      dimension of the input tensor is to be reversed. The dimension must be in the range [0, n).
    // *
    // * Outputs:
    // * * 0: The reversed tensor of the same shape as the input tensor.
    // *      For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} and {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM_SIGNED} tensors,
    // *      the scales and zeroPoint must be the same as input0.
    // *
    // * Available since NNAPI feature level 7.
    REVERSE = 105,
}
Operation types. The type of an operation in a model. Available since NNAPI feature level 1.
Related Procedures With Parameters
PaddingCode ¶
PaddingCode :: enum int {
    // *
    // * SAME padding.
    // * Padding on both ends is the "same":
    // *   padding_to_beginning = total_padding / 2
    // *   padding_to_end       = (total_padding + 1) / 2
    // * i.e., for an even amount of padding, the padding on both ends is exactly the
    // * same; for an odd amount of padding, the padding at the end is bigger than the
    // * padding at the beginning by 1.
    // *
    // * total_padding is a function of input size, stride, dilation and filter size.
    // * It can be computed as follows:
    // *   out_size = (input_size + stride - 1) / stride
    // *   effective_filter_size = (filter_size - 1) * dilation + 1
    // *   needed_input = (out_size - 1) * stride + effective_filter_size
    // *   total_padding = max(0, needed_input - input_size)
    // * The computation is the same for the horizontal and vertical directions.
    SAME = 1,

    // *
    // * VALID padding.
    // * No padding. When the input size is not evenly divisible by the filter size,
    // * the input at the end that could not fill the whole filter tile will simply be ignored.
    VALID = 2,
}
Implicit padding algorithms. Available since NNAPI feature level 1. A worked example of the SAME-padding arithmetic follows.
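The following is a standalone Odin sketch of the SAME-padding formulas above; the helper name same_padding is ours, not part of these bindings:

package padding_example

import "core:fmt"

// same_padding applies the formulas documented on PaddingCode.SAME.
same_padding :: proc(input_size, stride, filter_size, dilation: int) -> (begin, end: int) {
    out_size              := (input_size + stride - 1) / stride
    effective_filter_size := (filter_size - 1)*dilation + 1
    needed_input          := (out_size - 1)*stride + effective_filter_size
    total_padding         := max(0, needed_input - input_size)
    begin = total_padding / 2       // the smaller half goes to the beginning
    end   = (total_padding + 1) / 2 // odd totals put the extra element at the end
    return
}

main :: proc() {
    // 1-D input of 6 elements, 3-wide filter, stride 2, no dilation:
    // out_size = 3, needed_input = 7, total_padding = 1 -> pad 0 before, 1 after.
    fmt.println(same_padding(6, 2, 3, 1)) // prints: 0 1
}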
PermissionManagerResult ¶
PermissionManagerResult :: enum i32 {
    // *
    // * This is returned by APermissionManager_checkPermission()
    // * if the permission has been granted to the given package.
    PERMISSION_GRANTED = 0,
    // *
    // * This is returned by APermissionManager_checkPermission()
    // * if the permission has not been granted to the given package.
    PERMISSION_DENIED = -1,
}
Permission check results. Introduced in API 31.
Related Procedures With Parameters
PermissionManagerStatus ¶
PermissionManagerStatus :: enum i32 {
    // *
    // * This is returned if the permission check completed without errors.
    // * The output result is valid and contains one of
    // * {::PERMISSION_MANAGER_PERMISSION_GRANTED, ::PERMISSION_MANAGER_PERMISSION_DENIED}.
    OK = 0,
    // *
    // * This is returned if the permission check encountered an unspecified error.
    // * The output result is unmodified.
    ERROR_UNKNOWN = -1,
    // *
    // * This is returned if the permission check failed because the service is
    // * unavailable. The output result is unmodified.
    SERVICE_UNAVAILABLE = -2,
}
Permission check return status values. Introduced in API 31.
Related Procedures With Returns
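A hedged usage sketch tying the two permission types together, assuming the binding mirrors the NDK prototype APermissionManager_checkPermission(permission, pid, uid, out_result) and returns a PermissionManagerStatus; obtaining the caller's pid/uid is left to the caller:

// Sketch only: the binding's exact signature is an assumption.
check_camera_permission :: proc(pid: pid_t, uid: uid_t) -> bool {
    result: PermissionManagerResult
    status := APermissionManager_checkPermission("android.permission.CAMERA", pid, uid, &result)
    if status != .OK {
        // .ERROR_UNKNOWN or .SERVICE_UNAVAILABLE: `result` is unmodified, do not trust it.
        return false
    }
    return result == .PERMISSION_GRANTED
}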
PreferenceCode ¶
PreferenceCode :: enum i32 {
    // *
    // * Prefer executing in a way that minimizes battery drain.
    // * This is desirable for compilations that will be executed often.
    PREFER_LOW_POWER = 0,
    // *
    // * Prefer returning a single answer as fast as possible, even if this causes
    // * more power consumption.
    PREFER_FAST_SINGLE_ANSWER = 1,
    // *
    // * Prefer maximizing the throughput of successive frames, for example when
    // * processing successive frames coming from the camera.
    PREFER_SUSTAINED_SPEED = 2,
}
Execution preferences. Available since NNAPI feature level 1.
Related Procedures With Parameters
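A minimal sketch of choosing an execution preference, assuming ANeuralNetworksCompilation_setPreference is bound here with the NDK shape (compilation, preference) and accepts a PreferenceCode directly:

// Sketch only: `compilation` is assumed to come from ANeuralNetworksCompilation_create.
prefer_low_power :: proc(compilation: ^ANeuralNetworksCompilation) {
    ANeuralNetworksCompilation_setPreference(compilation, .PREFER_LOW_POWER)
}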
PriorityCode ¶
PriorityCode :: enum i32 { LOW = 90, MEDIUM = 100, HIGH = 110, DEFAULT = 100, }
Relative execution priority. Available since NNAPI feature level 4.
Related Procedures With Parameters
ResNsendFlagsBits ¶
ResNsendFlagsBits :: enum u32 {
    // *
    // * Send a single request to a single resolver and fail on timeout or network errors.
    RESOLV_NO_RETRY = 0,
    // *
    // * Don't lookup this request in the cache, and don't cache the result of the lookup.
    // * This flag implies {@link #ANDROID_RESOLV_NO_CACHE_LOOKUP}.
    RESOLV_NO_CACHE_STORE = 1,
    // *
    // * Don't lookup the request in cache.
    RESOLV_NO_CACHE_LOOKUP = 2,
}
Possible values of the flags argument to android_res_nsend and android_res_nquery. Values are ORed together.
Seek_Whence ¶
Seek_Whence :: enum i32 { SET = 0, CUR = 1, END = 2, DATA = 3, HOLE = 4, }
Same as sys/linux, but this is i32 instead of i16.
Related Procedures With Parameters
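A hedged sketch of seeking within an asset, assuming AAsset_seek is bound with the NDK shape (asset, offset, whence) and returns the new offset or -1 on error:

// Sketch only: the 16-byte header size is purely illustrative.
skip_header :: proc(asset: ^AAsset) -> bool {
    if AAsset_seek(asset, 16, .SET) == -1 {
        return false
    }
    return AAsset_getRemainingLength(asset) >= 0
}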
SensorAdditionalInfo ¶
SensorAdditionalInfo :: enum i32 {
    // Marks the beginning of additional information frames
    BEGIN = 0,
    // Marks the end of additional information frames
    END = 1,
    // *
    // * Estimation of the delay that is not tracked by sensor timestamps. This
    // * includes delay introduced by sensor front-end filtering, data transport, etc.
    // * float[2]: delay in seconds, standard deviation of estimated value
    UNTRACKED_DELAY = 65536,
    // float: Celsius temperature
    INTERNAL_TEMPERATURE,
    // *
    // * First three rows of a homogeneous matrix, which represents calibration to
    // * a three-element vector raw sensor reading.
    // * float[12]: 3x4 matrix in row major order
    VEC3_CALIBRATION,
    // *
    // * Location and orientation of sensor element in the device frame: origin is
    // * the geometric center of the mobile device screen surface; the axis
    // * definition corresponds to Android sensor definitions.
    // * float[12]: 3x4 matrix in row major order
    SENSOR_PLACEMENT,
    // *
    // * float[2]: raw sample period in seconds,
    // *           standard deviation of sampling period
    SAMPLING,
}
Sensor Additional Info Types. Used to populate {@link AAdditionalInfoEvent#type}.
SensorDirectChannelType ¶
SensorDirectChannelType :: enum i32 {
    // shared memory created by ASharedMemory_create
    SHARED_MEMORY = 1,
    // AHardwareBuffer
    HARDWARE_BUFFER = 2,
}
Sensor Direct Channel Type.
Related Procedures With Parameters
SensorDirectReportRate ¶
SensorDirectReportRate :: enum i32 {
    // stopped
    STOP = 0,
    // nominal 50Hz
    NORMAL = 1,
    // nominal 200Hz
    FAST = 2,
    // nominal 800Hz
    VERY_FAST = 3,
}
Sensor Direct Report Rates.
Related Procedures With Parameters
Related Procedures With Returns
SensorReportingMode ¶
SensorReportingMode :: enum i32 { INVALID = -1, CONTINUOUS = 0, ON_CHANGE = 1, ONE_SHOT = 2, SPECIAL_TRIGGER = 3, }
Sensor Reporting Modes.
Related Procedures With Returns
SensorStatus ¶
SensorStatus :: enum i8 { NO_CONTACT = -1, UNRELIABLE = 0, ACCURACY_LOW = 1, ACCURACY_MEDIUM = 2, ACCURACY_HIGH = 3, }
Sensor accuracy measure.
SensorType ¶
SensorType :: enum i32 {
    // *
    // * Invalid sensor type. Returned by {@link ASensor_getType} as error value.
    INVALID = -1,
    // *
    // * {@link ASENSOR_TYPE_ACCELEROMETER}
    // * reporting-mode: continuous
    // *
    // * All values are in SI units (m/s^2) and measure the acceleration of the
    // * device minus the force of gravity.
    ACCELEROMETER = 1,
    // *
    // * {@link ASENSOR_TYPE_MAGNETIC_FIELD}
    // * reporting-mode: continuous
    // *
    // * All values are in micro-Tesla (uT) and measure the geomagnetic
    // * field in the X, Y and Z axis.
    MAGNETIC_FIELD = 2,
    // Deprecated in API level 15.
    // This isn't defined in the NDK, but the procedures report it as a valid sensor and can work with it.
    ORIENTATION = 3,
    // *
    // * {@link ASENSOR_TYPE_GYROSCOPE}
    // * reporting-mode: continuous
    // *
    // * All values are in radians/second and measure the rate of rotation
    // * around the X, Y and Z axis.
    GYROSCOPE = 4,
    // *
    // * {@link ASENSOR_TYPE_LIGHT}
    // * reporting-mode: on-change
    // *
    // * The light sensor value is returned in SI lux units.
    LIGHT = 5,
    // *
    // * {@link ASENSOR_TYPE_PRESSURE}
    // *
    // * The pressure sensor value is returned in hPa (millibar).
    PRESSURE = 6,
    // Deprecated in API level 15. Use AMBIENT_TEMPERATURE instead.
    // This isn't defined in the NDK, but the procedures report it as a valid sensor and can work with it.
    TEMPERATURE = 7,
    // *
    // * {@link ASENSOR_TYPE_PROXIMITY}
    // * reporting-mode: on-change
    // *
    // * The proximity sensor which turns the screen off and back on during calls is the
    // * wake-up proximity sensor. Implement the wake-up proximity sensor before implementing
    // * a non wake-up proximity sensor. For the wake-up proximity sensor set the flag
    // * SENSOR_FLAG_WAKE_UP.
    // * The value corresponds to the distance to the nearest object in centimeters.
    PROXIMITY = 8,
    // *
    // * {@link ASENSOR_TYPE_GRAVITY}
    // *
    // * All values are in SI units (m/s^2) and measure the direction and
    // * magnitude of gravity. When the device is at rest, the output of
    // * the gravity sensor should be identical to that of the accelerometer.
    GRAVITY = 9,
    // *
    // * {@link ASENSOR_TYPE_LINEAR_ACCELERATION}
    // * reporting-mode: continuous
    // *
    // * All values are in SI units (m/s^2) and measure the acceleration of the
    // * device not including the force of gravity.
    LINEAR_ACCELERATION = 10,
    // *
    // * {@link ASENSOR_TYPE_ROTATION_VECTOR}
    ROTATION_VECTOR = 11,
    // *
    // * {@link ASENSOR_TYPE_RELATIVE_HUMIDITY}
    // *
    // * The relative humidity sensor value is returned in percent.
    RELATIVE_HUMIDITY = 12,
    // *
    // * {@link ASENSOR_TYPE_AMBIENT_TEMPERATURE}
    // *
    // * The ambient temperature sensor value is returned in Celsius.
    AMBIENT_TEMPERATURE = 13,
    // *
    // * {@link ASENSOR_TYPE_MAGNETIC_FIELD_UNCALIBRATED}
    MAGNETIC_FIELD_UNCALIBRATED = 14,
    // *
    // * {@link ASENSOR_TYPE_GAME_ROTATION_VECTOR}
    GAME_ROTATION_VECTOR = 15,
    // *
    // * {@link ASENSOR_TYPE_GYROSCOPE_UNCALIBRATED}
    GYROSCOPE_UNCALIBRATED = 16,
    // *
    // * {@link ASENSOR_TYPE_SIGNIFICANT_MOTION}
    SIGNIFICANT_MOTION = 17,
    // *
    // * {@link ASENSOR_TYPE_STEP_DETECTOR}
    STEP_DETECTOR = 18,
    // *
    // * {@link ASENSOR_TYPE_STEP_COUNTER}
    STEP_COUNTER = 19,
    // *
    // * {@link ASENSOR_TYPE_GEOMAGNETIC_ROTATION_VECTOR}
    GEOMAGNETIC_ROTATION_VECTOR = 20,
    // *
    // * {@link ASENSOR_TYPE_HEART_RATE}
    HEART_RATE = 21,
    // *
    // * {@link ASENSOR_TYPE_POSE_6DOF}
    POSE_6DOF = 28,
    // *
    // * {@link ASENSOR_TYPE_STATIONARY_DETECT}
    STATIONARY_DETECT = 29,
    // *
    // * {@link ASENSOR_TYPE_MOTION_DETECT}
    MOTION_DETECT = 30,
    // *
    // * {@link ASENSOR_TYPE_HEART_BEAT}
    HEART_BEAT = 31,
    // *
    // * A constant describing a dynamic sensor meta event sensor.
    // *
    // * A sensor event of this type is received when a dynamic sensor is added to or removed from
    // * the system. This sensor type should always use special trigger report mode.
    DYNAMIC_SENSOR_META = 32,
    // *
    // * This sensor type is for delivering additional sensor information aside
    // * from sensor event data.
    // *
    // * Additional information may include:
    // * - {@link ASENSOR_ADDITIONAL_INFO_INTERNAL_TEMPERATURE}
    // * - {@link ASENSOR_ADDITIONAL_INFO_SAMPLING}
    // * - {@link ASENSOR_ADDITIONAL_INFO_SENSOR_PLACEMENT}
    // * - {@link ASENSOR_ADDITIONAL_INFO_UNTRACKED_DELAY}
    // * - {@link ASENSOR_ADDITIONAL_INFO_VEC3_CALIBRATION}
    // *
    // * This type will never bind to a sensor. In other words, no sensor in the
    // * sensor list can have the type {@link ASENSOR_TYPE_ADDITIONAL_INFO}.
    // *
    // * If a device supports the sensor additional information feature, it will
    // * report additional information events via {@link ASensorEvent} and will
    // * have the type of {@link ASensorEvent} set to
    // * {@link ASENSOR_TYPE_ADDITIONAL_INFO} and the sensor of {@link ASensorEvent} set
    // * to the handle of the reporting sensor.
    // *
    // * Additional information reports consist of multiple frames ordered by
    // * {@link ASensorEvent#timestamp}. The first frame in the report will have
    // * a {@link AAdditionalInfoEvent#type} of
    // * {@link ASENSOR_ADDITIONAL_INFO_BEGIN}, and the last frame in the report
    // * will have a {@link AAdditionalInfoEvent#type} of
    // * {@link ASENSOR_ADDITIONAL_INFO_END}.
    ADDITIONAL_INFO = 33,
    // *
    // * {@link ASENSOR_TYPE_LOW_LATENCY_OFFBODY_DETECT}
    LOW_LATENCY_OFFBODY_DETECT = 34,
    // *
    // * {@link ASENSOR_TYPE_ACCELEROMETER_UNCALIBRATED}
    ACCELEROMETER_UNCALIBRATED = 35,
    // *
    // * {@link ASENSOR_TYPE_HINGE_ANGLE}
    // * reporting-mode: on-change
    // *
    // * The hinge angle sensor value is returned in degrees.
    HINGE_ANGLE = 36,
    // *
    // * {@link ASENSOR_TYPE_HEAD_TRACKER}
    // * reporting-mode: continuous
    // *
    // * Measures the orientation and rotational velocity of a user's head. Only for internal use
    // * within the Android system.
    HEAD_TRACKER = 37,
    // *
    // * {@link ASENSOR_TYPE_ACCELEROMETER_LIMITED_AXES}
    // * reporting-mode: continuous
    // *
    // * The first three values are in SI units (m/s^2) and measure the acceleration of the device
    // * minus the force of gravity. The last three values indicate which acceleration axes are
    // * supported. A value of 1.0 means supported and a value of 0 means not supported.
    ACCELEROMETER_LIMITED_AXES = 38,
    // *
    // * {@link ASENSOR_TYPE_GYROSCOPE_LIMITED_AXES}
    // * reporting-mode: continuous
    // *
    // * The first three values are in radians/second and measure the rate of rotation around the X,
    // * Y and Z axis. The last three values indicate which rotation axes are supported. A value of
    // * 1.0 means supported and a value of 0 means not supported.
    GYROSCOPE_LIMITED_AXES = 39,
    // *
    // * {@link ASENSOR_TYPE_ACCELEROMETER_LIMITED_AXES_UNCALIBRATED}
    // * reporting-mode: continuous
    // *
    // * The first three values are in SI units (m/s^2) and measure the acceleration of the device
    // * minus the force of gravity. The middle three values represent the estimated bias for each
    // * axis. The last three values indicate which acceleration axes are supported. A value of 1.0
    // * means supported and a value of 0 means not supported.
    ACCELEROMETER_LIMITED_AXES_UNCALIBRATED = 40,
    // *
    // * {@link ASENSOR_TYPE_GYROSCOPE_LIMITED_AXES_UNCALIBRATED}
    // * reporting-mode: continuous
    // *
    // * The first three values are in radians/second and measure the rate of rotation around the X,
    // * Y and Z axis. The middle three values represent the estimated drift around each axis in
    // * rad/s. The last three values indicate which rotation axes are supported. A value of 1.0 means
    // * supported and a value of 0 means not supported.
    GYROSCOPE_LIMITED_AXES_UNCALIBRATED = 41,
    // *
    // * {@link ASENSOR_TYPE_HEADING}
    // * reporting-mode: continuous
    // *
    // * A heading sensor measures the direction in which the device is pointing
    // * relative to true north in degrees.
    HEADING = 42,
}
Sensor types. See [android.hardware.SensorEvent#values](https://developer.android.com/reference/android/hardware/SensorEvent.html#values) for detailed explanations of the data returned for each of these types.
Related Procedures With Parameters
Related Procedures With Returns
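A hedged sketch of enabling one of these sensor types, assuming the sensor procedures are bound with their NDK shapes (in particular, that ASensorManager_getDefaultSensor takes a SensorType and that ASensorManager_createEventQueue, ASensorEventQueue_enableSensor and ASensorEventQueue_setEventRate match the NDK prototypes):

// Sketch only. LOOPER_ID_USER is a caller-chosen ident, not a package constant.
enable_accelerometer :: proc(manager: ^ASensorManager, looper: ^ALooper) -> ^ASensorEventQueue {
    LOOPER_ID_USER :: 3

    accel := ASensorManager_getDefaultSensor(manager, .ACCELEROMETER)
    if accel == nil do return nil

    queue := ASensorManager_createEventQueue(manager, looper, LOOPER_ID_USER, nil, nil)
    if queue == nil do return nil

    ASensorEventQueue_enableSensor(queue, accel)
    // Ask for events every 20_000 us (50 Hz); must not be faster than the
    // sensor's minimum delay (ASensor_getMinDelay).
    ASensorEventQueue_setEventRate(queue, accel, 20_000)
    return queue
}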
ShowSoftInputFlags ¶
ShowSoftInputFlags :: enum u32 {
    // *
    // * Implicit request to show the input window, not as the result
    // * of a direct request by the user.
    IMPLICIT = 1,
    // *
    // * The user has forced the input method open (such as by
    // * long-pressing menu) so it should not be closed until they
    // * explicitly do so.
    FORCED = 2,
}
Flags for ANativeActivity_showSoftInput; see the Java InputMethodManager API for documentation.
Related Procedures With Parameters
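A one-call sketch, assuming ANativeActivity_showSoftInput is bound with the NDK shape (activity, flags) and takes these flags directly:

// Sketch only: ask the system to show the soft keyboard for this activity.
show_keyboard :: proc(activity: ^ANativeActivity) {
    ANativeActivity_showSoftInput(activity, .IMPLICIT)
}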
SurfaceTransactionTransparency ¶
SurfaceTransactionTransparency :: enum i8 { TRANSPARENT = 0, TRANSLUCENT = 1, OPAQUE = 2, }
Parameter for ASurfaceTransaction_setBufferTransparency().
Related Procedures With Parameters
SurfaceTransactionVisibility ¶
SurfaceTransactionVisibility :: enum i8 { HIDE = 0, SHOW = 1, }
Parameter for ASurfaceTransaction_setVisibility().
Related Procedures With Parameters
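A hedged sketch using the visibility parameter above, assuming the ASurfaceTransaction setters are bound with their NDK shapes and that the pending transaction is applied afterwards (ASurfaceTransaction_apply in the NDK):

// Sketch only: hide an overlay layer in a pending transaction.
hide_overlay :: proc(t: ^ASurfaceTransaction, overlay: ^ASurfaceControl) {
    ASurfaceTransaction_setVisibility(t, overlay, .HIDE)
}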
ToolType ¶
ToolType :: enum i32 { UNKNOWN = 0, FINGER = 1, STYLUS = 2, MOUSE = 3, ERASER = 4, PALM = 5, }
Constants that identify tool types. Refer to the documentation on the MotionEvent class for descriptions of each tool type.
Related Procedures With Returns
WindowFlagsBits ¶
WindowFlagsBits :: enum u32 {
    // *
    // * As long as this window is visible to the user, allow the lock
    // * screen to activate while the screen is on. This can be used
    // * independently, or in combination with {@link KEEP_SCREEN_ON}
    // * and/or {@link SHOW_WHEN_LOCKED}.
    ALLOW_LOCK_WHILE_SCREEN_ON = 0,
    // Everything behind this window will be dimmed.
    DIM_BEHIND = 1,
    // *
    // * Blur everything behind this window.
    // * @deprecated Blurring is no longer supported.
    BLUR_BEHIND = 2,
    // *
    // * This window won't ever get key input focus, so the
    // * user can not send key or other button events to it. Those will
    // * instead go to whatever focusable window is behind it. This flag
    // * will also enable {@link NOT_TOUCH_MODAL} whether or not that
    // * is explicitly set.
    // *
    // * Setting this flag also implies that the window will not need to
    // * interact with a soft input method, so it will be Z-ordered and positioned
    // * independently of any active input method (typically this means it
    // * gets Z-ordered on top of the input method, so it can use the full
    // * screen for its content and cover the input method if needed). You
    // * can use {@link ALT_FOCUSABLE_IM} to modify this behavior.
    NOT_FOCUSABLE = 3,
    // This window can never receive touch events.
    NOT_TOUCHABLE = 4,
    // *
    // * Even when this window is focusable (its
    // * {@link NOT_FOCUSABLE} is not set), allow any pointer events
    // * outside of the window to be sent to the windows behind it. Otherwise
    // * it will consume all pointer events itself, regardless of whether they
    // * are inside of the window.
    NOT_TOUCH_MODAL = 5,
    // *
    // * When set, if the device is asleep when the touch
    // * screen is pressed, you will receive this first touch event. Usually
    // * the first touch event is consumed by the system since the user can
    // * not see what they are pressing on.
    // *
    // * @deprecated This flag has no effect.
    TOUCHABLE_WHEN_WAKING = 6,
    // *
    // * As long as this window is visible to the user, keep
    // * the device's screen turned on and bright.
    KEEP_SCREEN_ON = 7,
    // *
    // * Place the window within the entire screen, ignoring
    // * decorations around the border (such as the status bar). The
    // * window must correctly position its contents to take the screen
    // * decoration into account.
    LAYOUT_IN_SCREEN = 8,
    // Allow the window to extend outside of the screen.
    LAYOUT_NO_LIMITS = 9,
    // *
    // * Hide all screen decorations (such as the status
    // * bar) while this window is displayed. This allows the window to
    // * use the entire display space for itself -- the status bar will
    // * be hidden when an app window with this flag set is on the top
    // * layer. A fullscreen window will ignore a value of
    // * <a href="/reference/android/view/WindowManager.LayoutParams#SOFT_INPUT_ADJUST_RESIZE">
    // * SOFT_INPUT_ADJUST_RESIZE</a>; the window will stay
    // * fullscreen and will not resize.
    FULLSCREEN = 10,
    // *
    // * Override {@link FULLSCREEN} and force the
    // * screen decorations (such as the status bar) to be shown.
    FORCE_NOT_FULLSCREEN = 11,
    // *
    // * Turn on dithering when compositing this window to the screen.
    // * @deprecated This flag is no longer used.
    DITHER = 12,
    // *
    // * Treat the content of the window as secure, preventing
    // * it from appearing in screenshots or from being viewed on non-secure
    // * displays.
    SECURE = 13,
    // *
    // * A special mode where the layout parameters are used
    // * to perform scaling of the surface when it is composited to the screen.
    SCALED = 14,
    // *
    // * Intended for windows that will often be used when the user is
    // * holding the screen against their face: it will aggressively
    // * filter the event stream to prevent unintended presses in this
    // * situation that may not be desired for a particular window. When
    // * such an event stream is detected, the application will receive
    // * a {@link AMOTION_EVENT_ACTION_CANCEL} to indicate this, so
    // * applications can handle this accordingly by taking no action on
    // * the event until the finger is released.
    IGNORE_CHEEK_PRESSES = 15,
    // *
    // * A special option only for use in combination with
    // * {@link LAYOUT_IN_SCREEN}. When requesting layout in the
    // * screen your window may appear on top of or behind screen decorations
    // * such as the status bar. By also including this flag, the window
    // * manager will report the inset rectangle needed to ensure your
    // * content is not covered by screen decorations.
    LAYOUT_INSET_DECOR = 16,
    // *
    // * Invert the state of {@link NOT_FOCUSABLE} with
    // * respect to how this window interacts with the current input method.
    // * That is, if FLAG_NOT_FOCUSABLE is set and this flag is set,
    // * then the window will behave as if it needs to interact with the
    // * input method and thus be placed behind/away from it; if {@link
    // * NOT_FOCUSABLE} is not set and this flag is set,
    // * then the window will behave as if it doesn't need to interact
    // * with the input method and can be placed to use more space and
    // * cover the input method.
    ALT_FOCUSABLE_IM = 17,
    // *
    // * If you have set {@link NOT_TOUCH_MODAL}, you
    // * can set this flag to receive a single special MotionEvent with
    // * the action {@link AMOTION_EVENT_ACTION_OUTSIDE} for
    // * touches that occur outside of your window. Note that you will not
    // * receive the full down/move/up gesture, only the location of the
    // * first down as an {@link AMOTION_EVENT_ACTION_OUTSIDE}.
    WATCH_OUTSIDE_TOUCH = 18,
    // *
    // * Special flag to let windows be shown when the screen
    // * is locked. This will let application windows take precedence over
    // * key guard or any other lock screens. Can be used with
    // * {@link KEEP_SCREEN_ON} to turn screen on and display windows
    // * directly before showing the key guard window. Can be used with
    // * {@link DISMISS_KEYGUARD} to automatically fully dismiss
    // * non-secure keyguards. This flag only applies to the top-most
    // * full-screen window.
    SHOW_WHEN_LOCKED = 19,
    // *
    // * Ask that the system wallpaper be shown behind
    // * your window. The window surface must be translucent to be able
    // * to actually see the wallpaper behind it; this flag just ensures
    // * that the wallpaper surface will be there if this window actually
    // * has translucent regions.
    SHOW_WALLPAPER = 20,
    // *
    // * When set as a window is being added or made
    // * visible, once the window has been shown then the system will
    // * poke the power manager's user activity (as if the user had woken
    // * up the device) to turn the screen on.
    TURN_SCREEN_ON = 21,
    // *
    // * When set, the window will cause the keyguard to
    // * be dismissed, but only if it is not a secure lock keyguard. Because such
    // * a keyguard is not needed for security, it will never re-appear if
    // * the user navigates to another window (in contrast to
    // * {@link SHOW_WHEN_LOCKED}, which will only temporarily
    // * hide both secure and non-secure keyguards but ensure they reappear
    // * when the user moves to another UI that doesn't hide them).
    // * If the keyguard is currently active and is secure (requires an
    // * unlock pattern) then the user will still need to confirm it before
    // * seeing this window, unless {@link SHOW_WHEN_LOCKED} has
    // * also been set.
    DISMISS_KEYGUARD = 22,
}
Window flags, as per the Java API at android.view.WindowManager.LayoutParams.
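A hedged sketch of setting window flags, assuming ANativeActivity_setWindowFlags is bound with the NDK shape (activity, addFlags, removeFlags) and takes raw u32 masks:

// Sketch only: keep the screen on and go fullscreen; remove nothing.
keep_screen_on :: proc(activity: ^ANativeActivity) {
    add: bit_set[WindowFlagsBits; u32] = {.KEEP_SCREEN_ON, .FULLSCREEN}
    ANativeActivity_setWindowFlags(activity, transmute(u32)add, 0)
}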
addrinfo ¶
addrinfo :: struct {
    ai_flags:     i32,       // AI_PASSIVE, AI_CANONNAME, AI_NUMERICHOST
    ai_family:    i32,       // PF_xxx
    ai_socktype:  i32,       // SOCK_xxx
    ai_protocol:  i32,       // 0 or IPPROTO_xxx for IPv4 and IPv6
    ai_addrlen:   u32,       // length of ai_addr
    ai_canonname: cstring,   // canonical name for hostname
    ai_addr:      ^sockaddr, // binary address
    ai_next:      ^addrinfo,
}
Related Procedures With Parameters
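A hedged sketch of resolving a host on a specific network, assuming android_getaddrinfofornetwork follows the NDK shape (network, node, service, hints, res) and returns 0 on success:

// Sketch only: AF_INET = 2 and SOCK_STREAM = 1 are the bionic values; the
// result should be released with bionic's freeaddrinfo when done.
resolve_on_network :: proc(network: net_handle_t) -> ^addrinfo {
    hints := addrinfo{ai_family = 2, ai_socktype = 1}
    res: ^addrinfo
    if android_getaddrinfofornetwork(network, "example.com", "443", &hints, &res) != 0 {
        return nil
    }
    return res
}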
android_app ¶
android_app :: struct {
    // The application can place a pointer to its own state object here if it likes.
    userData: rawptr,

    // Fill this in with the function to process main app commands (APP_CMD_*).
    onAppCmd: proc "c" (app: ^android_app, cmd: AppCmd),

    // Fill this in with the function to process input events. At this point
    // the event has already been pre-dispatched, and it will be finished upon
    // return. Return 1 if you have handled the event, 0 for any default dispatching.
    onInputEvent: proc "c" (app: ^android_app, event: ^AInputEvent) -> i32,

    // The ANativeActivity object instance that this app is running in.
    activity: ^ANativeActivity,

    // The current configuration the app is running in.
    config: ^AConfiguration,

    // This is the last instance's saved state, as provided at creation time.
    // It is NULL if there was no state. You can use this as you need; the
    // memory will remain around until you call android_app_exec_cmd() for
    // APP_CMD_RESUME, at which point it will be freed and savedState set to NULL.
    // These variables should only be changed when processing an APP_CMD_SAVE_STATE,
    // at which point they will be initialized to NULL and you can malloc your
    // state and place the information here. In that case the memory will be
    // freed for you later.
    savedState:     rawptr,
    savedStateSize: uint,

    // The ALooper associated with the app's thread.
    looper: ^ALooper,

    // When non-NULL, this is the input queue from which the app will receive user input events.
    inputQueue: ^AInputQueue,

    // When non-NULL, this is the window surface that the app can draw in.
    window: ^ANativeWindow,

    // Current content rectangle of the window; this is the area where the
    // window's content should be placed to be seen by the user.
    contentRect: ARect,

    // Current state of the app's activity. May be either APP_CMD_START,
    // APP_CMD_RESUME, APP_CMD_PAUSE, or APP_CMD_STOP, see below.
    activityState: i32,

    // This is non-zero when the application's NativeActivity is being
    // destroyed and waiting for the app thread to complete.
    // The android_main function must return to its caller if this is non-zero.
    destroyRequested: i32,

    mutex:              pthread_mutex_t,
    cond:               pthread_cond_t,
    msgread:            i32,
    msgwrite:           i32,
    thread:             pthread_t,
    cmdPollSource:      android_poll_source,
    inputPollSource:    android_poll_source,
    running:            i32,
    stateSaved:         i32,
    destroyed:          i32,
    redrawNeeded:       i32,
    pendingInputQueue:  ^AInputQueue,
    pendingWindow:      ^ANativeWindow,
    pendingContentRect: ARect,
}
Related Procedures With Returns
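A minimal sketch of the classic native_app_glue-style main loop over this struct; it assumes ALooper_pollAll is bound with the NDK shape (timeout, outFd, outEvents, outData) and that its result casts to i32 (negative on timeout or error):

handle_cmd :: proc "c" (app: ^android_app, cmd: AppCmd) {
    // React to lifecycle commands (window created, focus gained, ...) here.
}

handle_input :: proc "c" (app: ^android_app, event: ^AInputEvent) -> i32 {
    return 0 // 0 = allow default dispatching of the event
}

android_main :: proc(app: ^android_app) {
    app.onAppCmd     = handle_cmd
    app.onInputEvent = handle_input

    for app.destroyRequested == 0 {
        events: i32
        source: ^android_poll_source
        // Drain everything that is ready; use -1 instead of 0 to block when
        // there is nothing to render between events.
        for i32(ALooper_pollAll(0, nil, &events, cast(^rawptr)&source)) >= 0 {
            if source != nil {
                source.process(app, source)
            }
        }
        // Render a frame here once app.window is non-nil.
    }
}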
android_dlextinfo ¶
android_dlextinfo :: struct {
    // A bitmask of `ANDROID_DLEXT_` enum values.
    flags: bit_set[DLextFlagsBits; u64],
    // Used by `ANDROID_DLEXT_RESERVED_ADDRESS` and `ANDROID_DLEXT_RESERVED_ADDRESS_HINT`.
    reserved_addr: rawptr,
    // Used by `ANDROID_DLEXT_RESERVED_ADDRESS` and `ANDROID_DLEXT_RESERVED_ADDRESS_HINT`.
    reserved_size: uint,
    // Used by `ANDROID_DLEXT_WRITE_RELRO` and `ANDROID_DLEXT_USE_RELRO`.
    relro_fd: i32,
    // Used by `ANDROID_DLEXT_USE_LIBRARY_FD`.
    library_fd: i32,
    // Used by `ANDROID_DLEXT_USE_LIBRARY_FD_OFFSET`.
    library_fd_offset: i64,
    // Used by `ANDROID_DLEXT_USE_NAMESPACE`.
    library_namespace: ^android_namespace_t,
}
Used to pass Android-specific arguments to android_dlopen_ext.
Related Procedures With Parameters
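A hedged sketch of loading a library from an already-open descriptor; every name here is an assumption: that android_dlopen_ext is bound with the NDK shape (filename, flag, extinfo), that DLextFlagsBits names the ANDROID_DLEXT_USE_LIBRARY_FD bit as USE_LIBRARY_FD, and that 0x2 is bionic's RTLD_NOW:

// Sketch only: `my_lib_fd` is a hypothetical descriptor for an open .so file.
load_from_fd :: proc(my_lib_fd: i32) -> rawptr {
    info := android_dlextinfo{
        flags      = {.USE_LIBRARY_FD},
        library_fd = my_lib_fd,
    }
    return android_dlopen_ext("libfoo.so", 0x2 /* RTLD_NOW */, &info)
}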
android_fdsan_error_level ¶
android_fdsan_error_level :: enum int {
    // No errors.
    DISABLED,
    // Warn once(ish) on error, and then downgrade to ANDROID_FDSAN_ERROR_LEVEL_DISABLED.
    WARN_ONCE,
    // Warn always on error.
    WARN_ALWAYS,
    // Abort on error.
    FATAL,
}
Related Procedures With Parameters
Related Procedures With Returns
android_fdsan_owner_type ¶
android_fdsan_owner_type :: enum int {
    // * Generic Java or native owners.
    // *
    // * Generic Java objects always use 255 as their type, using identityHashCode
    // * as the value of the tag, leaving bits 33-56 unset. Native pointers are sign
    // * extended from 48-bits of virtual address space, and so can have the MSB
    // * set to 255 as well. Use the value of bits 49-56 to distinguish between
    // * these cases.
    GENERIC_00 = 0,
    GENERIC_FF = 255,
    // FILE*
    FILE = 1,
    // DIR*
    DIR = 2,
    // android::base::unique_fd
    UNIQUE_FD = 3,
    // sqlite-owned file descriptors
    SQLITE = 4,
    // java.io.FileInputStream
    FILEINPUTSTREAM = 5,
    // java.io.FileOutputStream
    FILEOUTPUTSTREAM = 6,
    // java.io.RandomAccessFile
    RANDOMACCESSFILE = 7,
    // android.os.ParcelFileDescriptor
    PARCELFILEDESCRIPTOR = 8,
    // ART FdFile
    ART_FDFILE = 9,
    // java.net.DatagramSocketImpl
    DATAGRAMSOCKETIMPL = 10,
    // java.net.SocketImpl
    SOCKETIMPL = 11,
    // libziparchive's ZipArchive
    ZIPARCHIVE = 12,
    // native_handle_t
    NATIVE_HANDLE = 13,
    // android::Parcel
    PARCEL = 14,
}
For improved diagnostics, the type of a file descriptor's owner can be encoded in the most significant byte of the owner tag. Values of 0 and 0xff are ignored, which allows for raw pointers to be used as owner tags without modification.
Related Procedures With Parameters
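A hedged sketch of fdsan ownership, assuming android_fdsan_exchange_owner_tag(fd, expected_tag, new_tag) and android_fdsan_close_with_tag(fd, tag) are bound with their NDK shapes:

// Sketch only: `my_fd` is a descriptor this code owns; the tag constant is
// illustrative (its most significant byte is 0, i.e. a "generic" owner type).
claim_and_close :: proc(my_fd: i32) {
    TAG :: u64(0xDEAD_BEEF)
    // Claim ownership: an expected tag of 0 means "currently unowned".
    android_fdsan_exchange_owner_tag(my_fd, 0, TAG)
    // ... use the descriptor ...
    // Closing with the wrong tag trips fdsan at the configured error level.
    android_fdsan_close_with_tag(my_fd, TAG)
}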
android_namespace_t ¶
android_namespace_t :: struct {}
android_poll_source ¶
android_poll_source :: struct {
    // The identifier of this source. May be LOOPER_ID_MAIN or LOOPER_ID_INPUT.
    id: i32,
    // The android_app this ident is associated with.
    app: ^android_app,
    // Function to call to perform the standard processing of data from this source.
    process: proc "c" (app: ^android_app, source: ^android_poll_source),
}
jboolean ¶
jboolean :: u8
Primitive types that match up with Java equivalents.
Related Procedures With Parameters
jbooleanArray ¶
jbooleanArray :: distinct rawptr
jbyteArray ¶
jbyteArray :: distinct rawptr
jcharArray ¶
jcharArray :: distinct rawptr
jdoubleArray ¶
jdoubleArray :: distinct rawptr
jfieldID ¶
jfieldID :: rawptr
Opaque
Related Procedures With Parameters
- AAsset_read
- AChoreographer_postFrameCallback
- AChoreographer_postFrameCallback64
- AChoreographer_postFrameCallbackDelayed
- AChoreographer_postFrameCallbackDelayed64
- AChoreographer_postVsyncCallback
- AChoreographer_registerRefreshRateCallback
- AChoreographer_unregisterRefreshRateCallback
- AHardwareBuffer_lock
- AHardwareBuffer_lockAndGetInfo
- AImageDecoder_createFromBuffer
- AImageDecoder_decodeImage
- AInputQueue_attachLooper
- ALooper_addFd
- ALooper_pollAll
- ALooper_pollOnce
- ANeuralNetworksExecution_setInput
- ANeuralNetworksExecution_setOutput
- ANeuralNetworksModel_setOperandValue
- ASensorManager_createEventQueue
- AStorageManager_mountObb
- AStorageManager_unmountObb
- ASurfaceTransaction_setBufferWithRelease
- ASurfaceTransaction_setOnCommit
- ASurfaceTransaction_setOnComplete
- AThermal_registerThermalStatusListener
- AThermal_unregisterThermalStatusListener
- AndroidBitmap_compress
- AndroidBitmap_lockPixels
Related Procedures With Returns
jfloat ¶
jfloat :: f32
32-bit IEEE 754
Related Procedures With Parameters
- ANativeWindow_setFrameRate
- ANativeWindow_setFrameRateWithChangeStrategy
- ANeuralNetworksMemoryDesc_addInputRole
- ANeuralNetworksMemoryDesc_addOutputRole
- ASurfaceTransaction_setBufferAlpha
- ASurfaceTransaction_setColor
- ASurfaceTransaction_setDesiredHdrHeadroom
- ASurfaceTransaction_setExtendedRangeBrightness
- ASurfaceTransaction_setFrameRate
- ASurfaceTransaction_setFrameRateWithChangeStrategy
- ASurfaceTransaction_setScale
Related Procedures With Returns
- AFont_getAxisValue
- AMotionEvent_getAxisValue
- AMotionEvent_getHistoricalAxisValue
- AMotionEvent_getHistoricalOrientation
- AMotionEvent_getHistoricalPressure
- AMotionEvent_getHistoricalRawX
- AMotionEvent_getHistoricalRawY
- AMotionEvent_getHistoricalSize
- AMotionEvent_getHistoricalToolMajor
- AMotionEvent_getHistoricalToolMinor
- AMotionEvent_getHistoricalTouchMajor
- AMotionEvent_getHistoricalTouchMinor
- AMotionEvent_getHistoricalX
- AMotionEvent_getHistoricalY
- AMotionEvent_getOrientation
- AMotionEvent_getPressure
- AMotionEvent_getRawX
- AMotionEvent_getRawY
- AMotionEvent_getSize
- AMotionEvent_getToolMajor
- AMotionEvent_getToolMinor
- AMotionEvent_getTouchMajor
- AMotionEvent_getTouchMinor
- AMotionEvent_getX
- AMotionEvent_getXOffset
- AMotionEvent_getXPrecision
- AMotionEvent_getY
- AMotionEvent_getYOffset
- AMotionEvent_getYPrecision
- ASensor_getResolution
- AThermal_getThermalHeadroom
jfloatArray ¶
jfloatArray :: distinct rawptr
jint ¶
jint :: i32
signed 32 bits
Related Procedures With Parameters
- AConfiguration_setDensity
- AConfiguration_setGrammaticalGender
- AConfiguration_setKeyboard
- AConfiguration_setKeysHidden
- AConfiguration_setLayoutDirection
- AConfiguration_setMcc
- AConfiguration_setMnc
- AConfiguration_setNavHidden
- AConfiguration_setNavigation
- AConfiguration_setOrientation
- AConfiguration_setScreenHeightDp
- AConfiguration_setScreenLong
- AConfiguration_setScreenRound
- AConfiguration_setScreenSize
- AConfiguration_setScreenWidthDp
- AConfiguration_setSdkVersion
- AConfiguration_setSmallestScreenWidthDp
- AConfiguration_setTouchscreen
- AConfiguration_setUiModeNight
- AConfiguration_setUiModeType
- AFileDescriptor_setFd
- AHardwareBuffer_lock
- AHardwareBuffer_lockAndGetInfo
- AHardwareBuffer_lockPlanes
- AHardwareBuffer_recvHandleFromUnixSocket
- AHardwareBuffer_sendHandleToUnixSocket
- AHardwareBuffer_unlock
- AImageDecoder_computeSampledSize
- AImageDecoder_createFromFd
- AImageDecoder_setTargetSize
- AInputQueue_attachLooper
- AInputQueue_finishEvent
- ALooper_addFd
- ALooper_pollAll
- ALooper_pollOnce
- ALooper_prepare
- ALooper_removeFd
- ANativeWindow_setBuffersGeometry
- ANeuralNetworksEvent_createFromSyncFenceFd
- ANeuralNetworksEvent_getSyncFenceFd
- ANeuralNetworksExecution_getOutputOperandDimensions
- ANeuralNetworksExecution_getOutputOperandRank
- ANeuralNetworksExecution_setInput
- ANeuralNetworksExecution_setInputFromMemory
- ANeuralNetworksExecution_setOutput
- ANeuralNetworksExecution_setOutputFromMemory
- ANeuralNetworksMemory_createFromFd
- ANeuralNetworksModel_setOperandSymmPerChannelQuantParams
- ANeuralNetworksModel_setOperandValue
- ANeuralNetworksModel_setOperandValueFromMemory
- ANeuralNetworksModel_setOperandValueFromModel
- APerformanceHint_createSession
- APerformanceHint_setThreads
- APermissionManager_checkPermission
- ASensorEventQueue_registerSensor
- ASensorEventQueue_setEventRate
- ASensorManager_configureDirectReport
- ASensorManager_createEventQueue
- ASensorManager_createSharedMemoryDirectChannel
- ASensorManager_destroyDirectChannel
- ASharedMemory_getSize
- ASharedMemory_setProt
- AStorageManager_unmountObb
- ASurfaceTransaction_setBuffer
- ASurfaceTransaction_setBufferWithRelease
- ASurfaceTransaction_setPosition
- ASurfaceTransaction_setZOrder
- AThermal_getThermalHeadroom
- ATrace_beginAsyncSection
- ATrace_endAsyncSection
- AndroidBitmap_compress
- android_dlopen_ext
- android_fdsan_close_with_tag
- android_fdsan_exchange_owner_tag
- android_fdsan_get_owner_tag
- android_res_cancel
- android_res_nquery
- android_res_nresult
- android_setsocknetwork
- android_tag_socket
- android_tag_socket_with_uid
- android_untag_socket
- sync_merge
Related Procedures With Returns
- AAsset_isAllocated
- AAsset_openFileDescriptor
- AAsset_openFileDescriptor64
- AAsset_read
- AConfiguration_diff
- AConfiguration_getDensity
- AConfiguration_getGrammaticalGender
- AConfiguration_getKeyboard
- AConfiguration_getKeysHidden
- AConfiguration_getLayoutDirection
- AConfiguration_getMcc
- AConfiguration_getMnc
- AConfiguration_getNavHidden
- AConfiguration_getNavigation
- AConfiguration_getOrientation
- AConfiguration_getScreenHeightDp
- AConfiguration_getScreenLong
- AConfiguration_getScreenRound
- AConfiguration_getScreenSize
- AConfiguration_getScreenWidthDp
- AConfiguration_getSdkVersion
- AConfiguration_getSmallestScreenWidthDp
- AConfiguration_getTouchscreen
- AConfiguration_getUiModeNight
- AConfiguration_getUiModeType
- AConfiguration_isBetterThan
- AConfiguration_match
- AFileDescriptor_getFd
- AHardwareBuffer_allocate
- AHardwareBuffer_getId
- AHardwareBuffer_isSupported
- AImageDecoderFrameInfo_getBlendOp
- AImageDecoderFrameInfo_getDisposeOp
- AImageDecoderHeaderInfo_getHeight
- AImageDecoderHeaderInfo_getWidth
- AImageDecoder_getRepeatCount
- AInputEvent_getDeviceId
- AInputQueue_getEvent
- AInputQueue_hasEvents
- AInputQueue_preDispatchEvent
- AKeyEvent_getRepeatCount
- AKeyEvent_getScanCode
- AMotionEvent_getButtonState
- AMotionEvent_getPointerId
- ANativeWindow_clearFrameRate
- ANativeWindow_getHeight
- ANativeWindow_getWidth
- ANativeWindow_lock
- ANativeWindow_setBuffersDataSpace
- ANativeWindow_setBuffersTransform
- ANativeWindow_setFrameRate
- ANativeWindow_setFrameRateWithChangeStrategy
- ANativeWindow_unlockAndPost
- AObbInfo_getVersion
- APerformanceHint_reportActualWorkDuration
- APerformanceHint_reportActualWorkDuration2
- APerformanceHint_setPreferPowerEfficiency
- APerformanceHint_updateTargetWorkDuration
- ASensorEventQueue_disableSensor
- ASensorEventQueue_enableSensor
- ASensorEventQueue_hasEvents
- ASensorEventQueue_requestAdditionalInfoEvents
- ASensorManager_createHardwareBufferDirectChannel
- ASensorManager_destroyEventQueue
- ASensorManager_getSensorList
- ASensor_getFifoMaxEventCount
- ASensor_getFifoReservedEventCount
- ASensor_getHandle
- ASensor_getMinDelay
- ASharedMemory_create
- ASharedMemory_dupFromJava
- AStorageManager_isObbMounted
- ASurfaceTexture_attachToGLContext
- ASurfaceTexture_detachFromGLContext
- ASurfaceTexture_updateTexImage
- ASurfaceTransactionStats_getPresentFenceFd
- ASurfaceTransactionStats_getPreviousReleaseFenceFd
- AThermal_getThermalHeadroomThresholds
- AThermal_registerThermalStatusListener
- AThermal_unregisterThermalStatusListener
- android_getaddrinfofornetwork
- android_getprocdns
- android_getprocnetwork
- android_res_nsend
- android_setprocdns
- android_setprocnetwork
jintArray ¶
jintArray :: distinct rawptr
jlong ¶
jlong :: i64
signed 64 bits
Related Procedures With Parameters
- AAsset_openFileDescriptor
- AAsset_openFileDescriptor64
- AAsset_seek
- AAsset_seek64
- AChoreographer_postFrameCallbackDelayed
- APerformanceHint_createSession
- APerformanceHint_reportActualWorkDuration
- APerformanceHint_updateTargetWorkDuration
- ASensorEventQueue_registerSensor
- ASurfaceTransaction_setDesiredPresentTime
- ASurfaceTransaction_setFrameTimeline
- ATrace_setCounter
- AWorkDuration_setActualCpuDurationNanos
- AWorkDuration_setActualGpuDurationNanos
- AWorkDuration_setActualTotalDurationNanos
- AWorkDuration_setWorkPeriodStartTimestampNanos
Related Procedures With Returns
- AAsset_getLength
- AAsset_getLength64
- AAsset_getRemainingLength
- AAsset_getRemainingLength64
- AChoreographerFrameCallbackData_getFrameTimeNanos
- AChoreographerFrameCallbackData_getFrameTimelineDeadlineNanos
- AChoreographerFrameCallbackData_getFrameTimelineExpectedPresentationTimeNanos
- AChoreographerFrameCallbackData_getFrameTimelineVsyncId
- AImageDecoderFrameInfo_getDuration
- AKeyEvent_getDownTime
- AKeyEvent_getEventTime
- AMotionEvent_getDownTime
- AMotionEvent_getEventTime
- AMotionEvent_getHistoricalEventTime
- APerformanceHint_getPreferredUpdateRateNanos
- ASurfaceTexture_getTimestamp
- ASurfaceTransactionStats_getAcquireTime
- ASurfaceTransactionStats_getLatchTime
jlongArray ¶
jlongArray :: distinct rawptr
jmethodID ¶
jmethodID :: rawptr
Related Procedures With Parameters
- AAsset_read
- AChoreographer_postFrameCallback
- AChoreographer_postFrameCallback64
- AChoreographer_postFrameCallbackDelayed
- AChoreographer_postFrameCallbackDelayed64
- AChoreographer_postVsyncCallback
- AChoreographer_registerRefreshRateCallback
- AChoreographer_unregisterRefreshRateCallback
- AHardwareBuffer_lock
- AHardwareBuffer_lockAndGetInfo
- AImageDecoder_createFromBuffer
- AImageDecoder_decodeImage
- AInputQueue_attachLooper
- ALooper_addFd
- ALooper_pollAll
- ALooper_pollOnce
- ANeuralNetworksExecution_setInput
- ANeuralNetworksExecution_setOutput
- ANeuralNetworksModel_setOperandValue
- ASensorManager_createEventQueue
- AStorageManager_mountObb
- AStorageManager_unmountObb
- ASurfaceTransaction_setBufferWithRelease
- ASurfaceTransaction_setOnCommit
- ASurfaceTransaction_setOnComplete
- AThermal_registerThermalStatusListener
- AThermal_unregisterThermalStatusListener
- AndroidBitmap_compress
- AndroidBitmap_lockPixels
Related Procedures With Returns
jobject ¶
jobject :: distinct rawptr
Reference types, in C.
Related Procedures With Parameters
- AAssetManager_fromJava
- AFileDescriptor_getFd
- AFileDescriptor_setFd
- AHardwareBuffer_fromHardwareBuffer
- AInputQueue_fromJava
- AKeyEvent_fromJava
- AMotionEvent_fromJava
- ANativeWindow_fromSurface
- ASharedMemory_dupFromJava
- ASurfaceTexture_fromSurfaceTexture
- AndroidBitmap_getDataSpace
- AndroidBitmap_getHardwareBuffer
- AndroidBitmap_getInfo
- AndroidBitmap_lockPixels
- AndroidBitmap_unlockPixels
Related Procedures With Returns
jobjectArray ¶
jobjectArray :: distinct rawptr
jobjectRefType ¶
jobjectRefType :: enum i32 { JNIInvalidRefType = 0, JNILocalRefType = 1, JNIGlobalRefType = 2, JNIWeakGlobalRefType = 3, }
jshortArray ¶
jshortArray :: distinct rawptr
jsize ¶
jsize :: i32
"cardinal indices and sizes"
Related Procedures With Parameters
- AConfiguration_setDensity
- AConfiguration_setGrammaticalGender
- AConfiguration_setKeyboard
- AConfiguration_setKeysHidden
- AConfiguration_setLayoutDirection
- AConfiguration_setMcc
- AConfiguration_setMnc
- AConfiguration_setNavHidden
- AConfiguration_setNavigation
- AConfiguration_setOrientation
- AConfiguration_setScreenHeightDp
- AConfiguration_setScreenLong
- AConfiguration_setScreenRound
- AConfiguration_setScreenSize
- AConfiguration_setScreenWidthDp
- AConfiguration_setSdkVersion
- AConfiguration_setSmallestScreenWidthDp
- AConfiguration_setTouchscreen
- AConfiguration_setUiModeNight
- AConfiguration_setUiModeType
- AFileDescriptor_setFd
- AHardwareBuffer_lock
- AHardwareBuffer_lockAndGetInfo
- AHardwareBuffer_lockPlanes
- AHardwareBuffer_recvHandleFromUnixSocket
- AHardwareBuffer_sendHandleToUnixSocket
- AHardwareBuffer_unlock
- AImageDecoder_computeSampledSize
- AImageDecoder_createFromFd
- AImageDecoder_setTargetSize
- AInputQueue_attachLooper
- AInputQueue_finishEvent
- ALooper_addFd
- ALooper_pollAll
- ALooper_pollOnce
- ALooper_prepare
- ALooper_removeFd
- ANativeWindow_setBuffersGeometry
- ANeuralNetworksEvent_createFromSyncFenceFd
- ANeuralNetworksEvent_getSyncFenceFd
- ANeuralNetworksExecution_getOutputOperandDimensions
- ANeuralNetworksExecution_getOutputOperandRank
- ANeuralNetworksExecution_setInput
- ANeuralNetworksExecution_setInputFromMemory
- ANeuralNetworksExecution_setOutput
- ANeuralNetworksExecution_setOutputFromMemory
- ANeuralNetworksMemory_createFromFd
- ANeuralNetworksModel_setOperandSymmPerChannelQuantParams
- ANeuralNetworksModel_setOperandValue
- ANeuralNetworksModel_setOperandValueFromMemory
- ANeuralNetworksModel_setOperandValueFromModel
- APerformanceHint_createSession
- APerformanceHint_setThreads
- APermissionManager_checkPermission
- ASensorEventQueue_registerSensor
- ASensorEventQueue_setEventRate
- ASensorManager_configureDirectReport
- ASensorManager_createEventQueue
- ASensorManager_createSharedMemoryDirectChannel
- ASensorManager_destroyDirectChannel
- ASharedMemory_getSize
- ASharedMemory_setProt
- AStorageManager_unmountObb
- ASurfaceTransaction_setBuffer
- ASurfaceTransaction_setBufferWithRelease
- ASurfaceTransaction_setPosition
- ASurfaceTransaction_setZOrder
- AThermal_getThermalHeadroom
- ATrace_beginAsyncSection
- ATrace_endAsyncSection
- AndroidBitmap_compress
- android_dlopen_ext
- android_fdsan_close_with_tag
- android_fdsan_exchange_owner_tag
- android_fdsan_get_owner_tag
- android_res_cancel
- android_res_nquery
- android_res_nresult
- android_setsocknetwork
- android_tag_socket
- android_tag_socket_with_uid
- android_untag_socket
- sync_merge
Related Procedures With Returns
- AAsset_isAllocated
- AAsset_openFileDescriptor
- AAsset_openFileDescriptor64
- AAsset_read
- AConfiguration_diff
- AConfiguration_getDensity
- AConfiguration_getGrammaticalGender
- AConfiguration_getKeyboard
- AConfiguration_getKeysHidden
- AConfiguration_getLayoutDirection
- AConfiguration_getMcc
- AConfiguration_getMnc
- AConfiguration_getNavHidden
- AConfiguration_getNavigation
- AConfiguration_getOrientation
- AConfiguration_getScreenHeightDp
- AConfiguration_getScreenLong
- AConfiguration_getScreenRound
- AConfiguration_getScreenSize
- AConfiguration_getScreenWidthDp
- AConfiguration_getSdkVersion
- AConfiguration_getSmallestScreenWidthDp
- AConfiguration_getTouchscreen
- AConfiguration_getUiModeNight
- AConfiguration_getUiModeType
- AConfiguration_isBetterThan
- AConfiguration_match
- AFileDescriptor_getFd
- AHardwareBuffer_allocate
- AHardwareBuffer_getId
- AHardwareBuffer_isSupported
- AImageDecoderFrameInfo_getBlendOp
- AImageDecoderFrameInfo_getDisposeOp
- AImageDecoderHeaderInfo_getHeight
- AImageDecoderHeaderInfo_getWidth
- AImageDecoder_getRepeatCount
- AInputEvent_getDeviceId
- AInputQueue_getEvent
- AInputQueue_hasEvents
- AInputQueue_preDispatchEvent
- AKeyEvent_getRepeatCount
- AKeyEvent_getScanCode
- AMotionEvent_getButtonState
- AMotionEvent_getPointerId
- ANativeWindow_clearFrameRate
- ANativeWindow_getHeight
- ANativeWindow_getWidth
- ANativeWindow_lock
- ANativeWindow_setBuffersDataSpace
- ANativeWindow_setBuffersTransform
- ANativeWindow_setFrameRate
- ANativeWindow_setFrameRateWithChangeStrategy
- ANativeWindow_unlockAndPost
- AObbInfo_getVersion
- APerformanceHint_reportActualWorkDuration
- APerformanceHint_reportActualWorkDuration2
- APerformanceHint_setPreferPowerEfficiency
- APerformanceHint_updateTargetWorkDuration
- ASensorEventQueue_disableSensor
- ASensorEventQueue_enableSensor
- ASensorEventQueue_hasEvents
- ASensorEventQueue_requestAdditionalInfoEvents
- ASensorManager_createHardwareBufferDirectChannel
- ASensorManager_destroyEventQueue
- ASensorManager_getSensorList
- ASensor_getFifoMaxEventCount
- ASensor_getFifoReservedEventCount
- ASensor_getHandle
- ASensor_getMinDelay
- ASharedMemory_create
- ASharedMemory_dupFromJava
- AStorageManager_isObbMounted
- ASurfaceTexture_attachToGLContext
- ASurfaceTexture_detachFromGLContext
- ASurfaceTexture_updateTexImage
- ASurfaceTransactionStats_getPresentFenceFd
- ASurfaceTransactionStats_getPreviousReleaseFenceFd
- AThermal_getThermalHeadroomThresholds
- AThermal_registerThermalStatusListener
- AThermal_unregisterThermalStatusListener
- android_getaddrinfofornetwork
- android_getprocdns
- android_getprocnetwork
- android_res_nsend
- android_setprocdns
- android_setprocnetwork
jthrowable ¶
jthrowable :: distinct rawptr
net_handle_t ¶
net_handle_t :: distinct u64
The corresponding C type for android.net.Network#getNetworkHandle() return values. The Java signed long value can be safely cast to a net_handle_t:

[C]   ((net_handle_t) java_long_network_handle)
[C++] static_cast<net_handle_t>(java_long_network_handle)

as appropriate.
Related Procedures With Parameters
Related Constants
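In Odin the same conversion is a plain explicit cast; a one-line sketch where `java_long_network_handle` stands for the jlong received from Java:

handle := net_handle_t(java_long_network_handle) // [Odin]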
off64_t ¶
off64_t :: i64
Related Procedures With Parameters
- AAsset_openFileDescriptor
- AAsset_openFileDescriptor64
- AAsset_seek
- AAsset_seek64
- AChoreographer_postFrameCallbackDelayed
- APerformanceHint_createSession
- APerformanceHint_reportActualWorkDuration
- APerformanceHint_updateTargetWorkDuration
- ASensorEventQueue_registerSensor
- ASurfaceTransaction_setDesiredPresentTime
- ASurfaceTransaction_setFrameTimeline
- ATrace_setCounter
- AWorkDuration_setActualCpuDurationNanos
- AWorkDuration_setActualGpuDurationNanos
- AWorkDuration_setActualTotalDurationNanos
- AWorkDuration_setWorkPeriodStartTimestampNanos
Related Procedures With Returns
- AAsset_getLength
- AAsset_getLength64
- AAsset_getRemainingLength
- AAsset_getRemainingLength64
- AChoreographerFrameCallbackData_getFrameTimeNanos
- AChoreographerFrameCallbackData_getFrameTimelineDeadlineNanos
- AChoreographerFrameCallbackData_getFrameTimelineExpectedPresentationTimeNanos
- AChoreographerFrameCallbackData_getFrameTimelineVsyncId
- AImageDecoderFrameInfo_getDuration
- AKeyEvent_getDownTime
- AKeyEvent_getEventTime
- AMotionEvent_getDownTime
- AMotionEvent_getEventTime
- AMotionEvent_getHistoricalEventTime
- APerformanceHint_getPreferredUpdateRateNanos
- ASurfaceTexture_getTimestamp
- ASurfaceTransactionStats_getAcquireTime
- ASurfaceTransactionStats_getLatchTime
off_t ¶
off_t :: i64
Related Procedures With Parameters
- AAsset_openFileDescriptor
- AAsset_openFileDescriptor64
- AAsset_seek
- AAsset_seek64
- AChoreographer_postFrameCallbackDelayed
- APerformanceHint_createSession
- APerformanceHint_reportActualWorkDuration
- APerformanceHint_updateTargetWorkDuration
- ASensorEventQueue_registerSensor
- ASurfaceTransaction_setDesiredPresentTime
- ASurfaceTransaction_setFrameTimeline
- ATrace_setCounter
- AWorkDuration_setActualCpuDurationNanos
- AWorkDuration_setActualGpuDurationNanos
- AWorkDuration_setActualTotalDurationNanos
- AWorkDuration_setWorkPeriodStartTimestampNanos
Related Procedures With Returns
- AAsset_getLength
- AAsset_getLength64
- AAsset_getRemainingLength
- AAsset_getRemainingLength64
- AChoreographerFrameCallbackData_getFrameTimeNanos
- AChoreographerFrameCallbackData_getFrameTimelineDeadlineNanos
- AChoreographerFrameCallbackData_getFrameTimelineExpectedPresentationTimeNanos
- AChoreographerFrameCallbackData_getFrameTimelineVsyncId
- AImageDecoderFrameInfo_getDuration
- AKeyEvent_getDownTime
- AKeyEvent_getEventTime
- AMotionEvent_getDownTime
- AMotionEvent_getEventTime
- AMotionEvent_getHistoricalEventTime
- APerformanceHint_getPreferredUpdateRateNanos
- ASurfaceTexture_getTimestamp
- ASurfaceTransactionStats_getAcquireTime
- ASurfaceTransactionStats_getLatchTime
pid_t ¶
pid_t :: i32
TODO: move to libc or something
Related Procedures With Parameters
- AConfiguration_setDensity
- AConfiguration_setGrammaticalGender
- AConfiguration_setKeyboard
- AConfiguration_setKeysHidden
- AConfiguration_setLayoutDirection
- AConfiguration_setMcc
- AConfiguration_setMnc
- AConfiguration_setNavHidden
- AConfiguration_setNavigation
- AConfiguration_setOrientation
- AConfiguration_setScreenHeightDp
- AConfiguration_setScreenLong
- AConfiguration_setScreenRound
- AConfiguration_setScreenSize
- AConfiguration_setScreenWidthDp
- AConfiguration_setSdkVersion
- AConfiguration_setSmallestScreenWidthDp
- AConfiguration_setTouchscreen
- AConfiguration_setUiModeNight
- AConfiguration_setUiModeType
- AFileDescriptor_setFd
- AHardwareBuffer_lock
- AHardwareBuffer_lockAndGetInfo
- AHardwareBuffer_lockPlanes
- AHardwareBuffer_recvHandleFromUnixSocket
- AHardwareBuffer_sendHandleToUnixSocket
- AHardwareBuffer_unlock
- AImageDecoder_computeSampledSize
- AImageDecoder_createFromFd
- AImageDecoder_setTargetSize
- AInputQueue_attachLooper
- AInputQueue_finishEvent
- ALooper_addFd
- ALooper_pollAll
- ALooper_pollOnce
- ALooper_prepare
- ALooper_removeFd
- ANativeWindow_setBuffersGeometry
- ANeuralNetworksEvent_createFromSyncFenceFd
- ANeuralNetworksEvent_getSyncFenceFd
- ANeuralNetworksExecution_getOutputOperandDimensions
- ANeuralNetworksExecution_getOutputOperandRank
- ANeuralNetworksExecution_setInput
- ANeuralNetworksExecution_setInputFromMemory
- ANeuralNetworksExecution_setOutput
- ANeuralNetworksExecution_setOutputFromMemory
- ANeuralNetworksMemory_createFromFd
- ANeuralNetworksModel_setOperandSymmPerChannelQuantParams
- ANeuralNetworksModel_setOperandValue
- ANeuralNetworksModel_setOperandValueFromMemory
- ANeuralNetworksModel_setOperandValueFromModel
- APerformanceHint_createSession
- APerformanceHint_setThreads
- APermissionManager_checkPermission
- ASensorEventQueue_registerSensor
- ASensorEventQueue_setEventRate
- ASensorManager_configureDirectReport
- ASensorManager_createEventQueue
- ASensorManager_createSharedMemoryDirectChannel
- ASensorManager_destroyDirectChannel
- ASharedMemory_getSize
- ASharedMemory_setProt
- AStorageManager_unmountObb
- ASurfaceTransaction_setBuffer
- ASurfaceTransaction_setBufferWithRelease
- ASurfaceTransaction_setPosition
- ASurfaceTransaction_setZOrder
- AThermal_getThermalHeadroom
- ATrace_beginAsyncSection
- ATrace_endAsyncSection
- AndroidBitmap_compress
- android_dlopen_ext
- android_fdsan_close_with_tag
- android_fdsan_exchange_owner_tag
- android_fdsan_get_owner_tag
- android_res_cancel
- android_res_nquery
- android_res_nresult
- android_setsocknetwork
- android_tag_socket
- android_tag_socket_with_uid
- android_untag_socket
- sync_merge
Related Procedures With Returns
- AAsset_isAllocated
- AAsset_openFileDescriptor
- AAsset_openFileDescriptor64
- AAsset_read
- AConfiguration_diff
- AConfiguration_getDensity
- AConfiguration_getGrammaticalGender
- AConfiguration_getKeyboard
- AConfiguration_getKeysHidden
- AConfiguration_getLayoutDirection
- AConfiguration_getMcc
- AConfiguration_getMnc
- AConfiguration_getNavHidden
- AConfiguration_getNavigation
- AConfiguration_getOrientation
- AConfiguration_getScreenHeightDp
- AConfiguration_getScreenLong
- AConfiguration_getScreenRound
- AConfiguration_getScreenSize
- AConfiguration_getScreenWidthDp
- AConfiguration_getSdkVersion
- AConfiguration_getSmallestScreenWidthDp
- AConfiguration_getTouchscreen
- AConfiguration_getUiModeNight
- AConfiguration_getUiModeType
- AConfiguration_isBetterThan
- AConfiguration_match
- AFileDescriptor_getFd
- AHardwareBuffer_allocate
- AHardwareBuffer_getId
- AHardwareBuffer_isSupported
- AImageDecoderFrameInfo_getBlendOp
- AImageDecoderFrameInfo_getDisposeOp
- AImageDecoderHeaderInfo_getHeight
- AImageDecoderHeaderInfo_getWidth
- AImageDecoder_getRepeatCount
- AInputEvent_getDeviceId
- AInputQueue_getEvent
- AInputQueue_hasEvents
- AInputQueue_preDispatchEvent
- AKeyEvent_getRepeatCount
- AKeyEvent_getScanCode
- AMotionEvent_getButtonState
- AMotionEvent_getPointerId
- ANativeWindow_clearFrameRate
- ANativeWindow_getHeight
- ANativeWindow_getWidth
- ANativeWindow_lock
- ANativeWindow_setBuffersDataSpace
- ANativeWindow_setBuffersTransform
- ANativeWindow_setFrameRate
- ANativeWindow_setFrameRateWithChangeStrategy
- ANativeWindow_unlockAndPost
- AObbInfo_getVersion
- APerformanceHint_reportActualWorkDuration
- APerformanceHint_reportActualWorkDuration2
- APerformanceHint_setPreferPowerEfficiency
- APerformanceHint_updateTargetWorkDuration
- ASensorEventQueue_disableSensor
- ASensorEventQueue_enableSensor
- ASensorEventQueue_hasEvents
- ASensorEventQueue_requestAdditionalInfoEvents
- ASensorManager_createHardwareBufferDirectChannel
- ASensorManager_destroyEventQueue
- ASensorManager_getSensorList
- ASensor_getFifoMaxEventCount
- ASensor_getFifoReservedEventCount
- ASensor_getHandle
- ASensor_getMinDelay
- ASharedMemory_create
- ASharedMemory_dupFromJava
- AStorageManager_isObbMounted
- ASurfaceTexture_attachToGLContext
- ASurfaceTexture_detachFromGLContext
- ASurfaceTexture_updateTexImage
- ASurfaceTransactionStats_getPresentFenceFd
- ASurfaceTransactionStats_getPreviousReleaseFenceFd
- AThermal_getThermalHeadroomThresholds
- AThermal_registerThermalStatusListener
- AThermal_unregisterThermalStatusListener
- android_getaddrinfofornetwork
- android_getprocdns
- android_getprocnetwork
- android_res_nsend
- android_setprocdns
- android_setprocnetwork
socklen_t ¶
socklen_t :: u32
See: https://android.googlesource.com/platform/bionic/+/main/docs/32-bit-abi.md
Related Procedures With Parameters
- AChoreographer_postFrameCallbackDelayed64
- AFontMatcher_match
- AFont_getAxisTag
- AFont_getAxisValue
- ANeuralNetworksCompilation_createForDevices
- ANeuralNetworksCompilation_getPreferredMemoryAlignmentForInput
- ANeuralNetworksCompilation_getPreferredMemoryAlignmentForOutput
- ANeuralNetworksCompilation_getPreferredMemoryPaddingForInput
- ANeuralNetworksCompilation_getPreferredMemoryPaddingForOutput
- ANeuralNetworksExecution_getOutputOperandDimensions
- ANeuralNetworksExecution_getOutputOperandRank
- ANeuralNetworksExecution_startComputeWithDependencies
- ANeuralNetworksMemoryDesc_addInputRole
- ANeuralNetworksMemoryDesc_addOutputRole
- ANeuralNetworksMemoryDesc_setDimensions
- ANeuralNetworksModel_addOperation
- ANeuralNetworksModel_getSupportedOperationsForDevices
- ANeuralNetworksModel_identifyInputsAndOutputs
- ANeuralNetworks_getDevice
- ANeuralNetworks_getDeviceCount
- APermissionManager_checkPermission
- ASurfaceTexture_attachToGLContext
- ASurfaceTransaction_setDamageRegion
- android_tag_socket
- android_tag_socket_with_uid
sync_fence_info ¶
sync_fence_info :: struct { obj_name: [32]u8, driver_name: [32]u8, status: i32, flags: u32, timestamp_ns: u64, }
TODO: move to sys/linux or sys/android or something idk.
Related Procedures With Returns
sync_file_info ¶
sync_file_info :: struct { name: [32]u8, status: i32, flags: u32, num_fences: u32, pad: u32, sync_fence_info: u64, }
Related Procedures With Parameters
uid_t ¶
uid_t :: u32
TODO: these should probably be put in android's libc bindings or something
Related Procedures With Parameters
- AChoreographer_postFrameCallbackDelayed64
- AFontMatcher_match
- AFont_getAxisTag
- AFont_getAxisValue
- ANeuralNetworksCompilation_createForDevices
- ANeuralNetworksCompilation_getPreferredMemoryAlignmentForInput
- ANeuralNetworksCompilation_getPreferredMemoryAlignmentForOutput
- ANeuralNetworksCompilation_getPreferredMemoryPaddingForInput
- ANeuralNetworksCompilation_getPreferredMemoryPaddingForOutput
- ANeuralNetworksExecution_getOutputOperandDimensions
- ANeuralNetworksExecution_getOutputOperandRank
- ANeuralNetworksExecution_startComputeWithDependencies
- ANeuralNetworksMemoryDesc_addInputRole
- ANeuralNetworksMemoryDesc_addOutputRole
- ANeuralNetworksMemoryDesc_setDimensions
- ANeuralNetworksModel_addOperation
- ANeuralNetworksModel_getSupportedOperationsForDevices
- ANeuralNetworksModel_identifyInputsAndOutputs
- ANeuralNetworks_getDevice
- ANeuralNetworks_getDeviceCount
- APermissionManager_checkPermission
- ASurfaceTexture_attachToGLContext
- ASurfaceTransaction_setDamageRegion
- android_tag_socket
- android_tag_socket_with_uid
Constants
ALOOPER_PREPARE_ALLOW_NON_CALLBACKS ¶
ALOOPER_PREPARE_ALLOW_NON_CALLBACKS: int : 1 << 0
Option for ALooper_prepare(). * This looper will accept calls to ALooper_addFd() that do not have a callback (that is, provide NULL for the callback). In this case the caller of ALooper_pollOnce() or ALooper_pollAll() MUST check the return value from these functions to discover when data is available on such fds and process it.
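A minimal sketch of the callback-less pattern this flag enables. The procedure names follow the NDK (ALooper_prepare, ALooper_addFd, ALooper_pollOnce); their exact Odin signatures, the .INPUT event flag, and pipe_read_fd are assumptions for illustration:
looper := ALooper_prepare(ALOOPER_PREPARE_ALLOW_NON_CALLBACKS)
// No callback is given (nil), so readiness must be discovered from the poll result.
ALooper_addFd(looper, pipe_read_fd, 1, {.INPUT}, nil, nil)
out_fd, out_events: i32
out_data: rawptr
if ALooper_pollOnce(-1, &out_fd, &out_events, &out_data) == 1 {
	// Our ident (1) came back: data is available on pipe_read_fd; read and process it here.
}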
ANEURALNETWORKS_BYTE_SIZE_OF_CACHE_TOKEN ¶
ANEURALNETWORKS_BYTE_SIZE_OF_CACHE_TOKEN: int : 32
* * For ANeuralNetworksCompilation_setCaching, specify the size * of the cache token required from the application. The size is in bytes. * * Available since NNAPI feature level 3.
ANEURALNETWORKS_MAX_SIZE_OF_IMMEDIATELY_COPIED_VALUES ¶
ANEURALNETWORKS_MAX_SIZE_OF_IMMEDIATELY_COPIED_VALUES: int : 128
* * For ANeuralNetworksModel_setOperandValue, values with a * length smaller than or equal to this will be immediately copied into * the model. The size is in bytes. * * Available since NNAPI feature level 1.
ANY_INPUT_SOURCE ¶
ANY_INPUT_SOURCE: bit_set[InputSourceDeviceBits; i32] : InputSourceDevice{.KEYBOARD, .DPAD, .GAMEPAD, .TOUCHSCREEN, .MOUSE, .STYLUS, .BLUETOOTH_STYLUS, .TRACKBALL, .MOUSE_RELATIVE, .TOUCHPAD, .TOUCH_NAVIGATION, .JOYSTICK, .HDMI, .SENSOR, .ROTARY_ENCODER}
ASENSOR_DELAY_INVALID ¶
ASENSOR_DELAY_INVALID: i32 : min(i32)
ASENSOR_FIFO_COUNT_INVALID ¶
ASENSOR_FIFO_COUNT_INVALID: int : -1
ASENSOR_INVALID ¶
ASENSOR_INVALID: int : -1
ASENSOR_MAGNETIC_FIELD_EARTH_MAX ¶
ASENSOR_MAGNETIC_FIELD_EARTH_MAX: f64 : 60.0
Maximum magnetic field on Earth's surface in uT
ASENSOR_MAGNETIC_FIELD_EARTH_MIN ¶
ASENSOR_MAGNETIC_FIELD_EARTH_MIN: f64 : 30.0
Minimum magnetic field on Earth's surface in uT
ASENSOR_RESOLUTION_INVALID ¶
ASENSOR_RESOLUTION_INVALID :: f32(0h7ff80000_00000001)
AllDLextFlags ¶
AllDLextFlags: bit_set[DLextFlagsBits; u64] : DLextFlags{.RESERVED_ADDRESS, .RESERVED_ADDRESS_HINT, .WRITE_RELRO, .USE_RELRO, .USE_LIBRARY_FD, .USE_LIBRARY_FD_OFFSET, .FORCE_LOAD, .USE_NAMESPACE, .RESERVED_ADDRESS_RECURSIVE}
COLOR_MODE ¶
COLOR_MODE: int : 0x10000
* Bit mask for wide color gamut and HDR configurations.
DENSITY ¶
DENSITY: int : 0x0100
* Bit mask for density configuration.
DENSITY_HIGH ¶
DENSITY_HIGH: int : 240
* Density: value corresponding to the hdpi resource qualifier.
DENSITY_LOW ¶
DENSITY_LOW: int : 120
* Density: value corresponding to the ldpi resource qualifier.
DENSITY_MEDIUM ¶
DENSITY_MEDIUM: int : 160
* Density: value corresponding to the mdpi resource qualifier.
DENSITY_TV ¶
DENSITY_TV: int : 213
* Density: value corresponding to the tvdpi resource qualifier.
DENSITY_XHIGH ¶
DENSITY_XHIGH: int : 320
* Density: value corresponding to the xhdpi resource qualifier.
DENSITY_XXHIGH ¶
DENSITY_XXHIGH: int : 480
* Density: value corresponding to the xxhdpi resource qualifier.
DENSITY_XXXHIGH ¶
DENSITY_XXXHIGH: int : 640
* Density: value corresponding to the xxxhdpi resource qualifier.
GRAMMATICAL_GENDER ¶
GRAMMATICAL_GENDER: int : 0x20000
* Bit mask for grammatical gender configuration.
GRAMMATICAL_GENDER_ANY ¶
GRAMMATICAL_GENDER_ANY: int : 0
* Grammatical gender: not specified.
GRAMMATICAL_GENDER_FEMININE ¶
GRAMMATICAL_GENDER_FEMININE: int : 2
* Grammatical gender: feminine.
GRAMMATICAL_GENDER_MASCULINE ¶
GRAMMATICAL_GENDER_MASCULINE: int : 3
* Grammatical gender: masculine.
GRAMMATICAL_GENDER_NEUTER ¶
GRAMMATICAL_GENDER_NEUTER: int : 1
* Grammatical gender: neuter.
HDR_NO ¶
HDR_NO: int : 0x1
* HDR: value that corresponds to the lowdr resource qualifier.
HDR_YES ¶
HDR_YES: int : 0x2
* HDR: value that corresponds to the highdr resource qualifier.
JNI_VERSION_1_1 ¶
JNI_VERSION_1_1: int : 0x00010001
JNI_VERSION_1_2 ¶
JNI_VERSION_1_2: int : 0x00010002
JNI_VERSION_1_4 ¶
JNI_VERSION_1_4: int : 0x00010004
JNI_VERSION_1_6 ¶
JNI_VERSION_1_6: int : 0x00010006
KEYBOARD ¶
KEYBOARD: int : 0x0010
* Bit mask for keyboard configuration.
KEYBOARD_12KEY ¶
KEYBOARD_12KEY: int : 0x0003
* Keyboard: value corresponding to the 12key resource qualifier.
KEYBOARD_HIDDEN ¶
KEYBOARD_HIDDEN: int : 0x0020
* Bit mask for keyboardHidden configuration.
KEYBOARD_NOKEYS ¶
KEYBOARD_NOKEYS: int : 0x0001
* Keyboard: value corresponding to the nokeys resource qualifier.
KEYBOARD_QWERTY ¶
KEYBOARD_QWERTY: int : 0x0002
* Keyboard: value corresponding to the qwerty resource qualifier.
KEYSHIDDEN_NO ¶
KEYSHIDDEN_NO: int : 0x0001
* Keyboard availability: value corresponding to the keysexposed resource qualifier.
KEYSHIDDEN_SOFT ¶
KEYSHIDDEN_SOFT: int : 0x0003
* Keyboard availability: value corresponding to the keyssoft resource qualifier.
KEYSHIDDEN_YES ¶
KEYSHIDDEN_YES: int : 0x0002
* Keyboard availability: value corresponding to the keyshidden resource qualifier.
LAYOUTDIR ¶
LAYOUTDIR: int : 0x4000
* Bit mask for layout direction configuration.
LAYOUTDIR_LTR ¶
LAYOUTDIR_LTR: int : 0x01
* Layout direction: value that corresponds to the ldltr resource qualifier.
LAYOUTDIR_RTL ¶
LAYOUTDIR_RTL: int : 0x02
* Layout direction: value that corresponds to the ldrtl resource qualifier.
LOCALE ¶
LOCALE: int : 0x0004
* Bit mask for locale configuration.
MCC ¶
MCC: int : 0x0001
* Bit mask for mcc configuration.
MNC ¶
MNC: int : 0x0002
* Bit mask for mnc configuration.
MNC_ZERO ¶
MNC_ZERO: int : 0xffff
* Constant used to represent MNC (Mobile Network Code) zero. 0 cannot be used, since it is used to represent an undefined MNC.
NAVHIDDEN_NO ¶
NAVHIDDEN_NO: int : 0x0001
* Navigation availability: value corresponding to the navexposed resource qualifier.
NAVHIDDEN_YES ¶
NAVHIDDEN_YES: int : 0x0002
* Navigation availability: value corresponding to the navhidden resource qualifier.
NAVIGATION ¶
NAVIGATION: int : 0x0040
* Bit mask for navigation configuration.
NAVIGATION_DPAD ¶
NAVIGATION_DPAD: int : 0x0002
* Navigation: value corresponding to the dpad resource qualifier.
NAVIGATION_NONAV ¶
NAVIGATION_NONAV: int : 0x0001
* Navigation: value corresponding to the nonav resource qualifier.
NAVIGATION_TRACKBALL ¶
NAVIGATION_TRACKBALL: int : 0x0003
* Navigation: value corresponding to the trackball resource qualifier.
NAVIGATION_WHEEL ¶
NAVIGATION_WHEEL: int : 0x0004
* Navigation: value corresponding to the wheel resource qualifier.
NETWORK_UNSPECIFIED ¶
NETWORK_UNSPECIFIED :: net_handle_t(0)
* * The value NETWORK_UNSPECIFIED indicates no specific network. * * For some functions (documented below), a previous binding may be cleared * by an invocation with NETWORK_UNSPECIFIED. * * Depending on the context it may indicate an error. It is expressly * not used to indicate some notion of the "current default network".
ORIENTATION ¶
ORIENTATION: int : 0x0080
* Bit mask for orientation configuration.
ORIENTATION_LAND ¶
ORIENTATION_LAND: int : 0x0002
* Orientation: value corresponding to the land resource qualifier.
ORIENTATION_PORT ¶
ORIENTATION_PORT: int : 0x0001
* Orientation: value corresponding to the port resource qualifier.
SCREENLONG_NO ¶
SCREENLONG_NO: int : 0x1
* Screen layout: value that corresponds to the notlong resource qualifier.
SCREENLONG_YES ¶
SCREENLONG_YES: int : 0x2
* Screen layout: value that corresponds to the long resource qualifier.
SCREENROUND_ANY ¶
SCREENROUND_ANY: int : 0x00
SCREENROUND_NO ¶
SCREENROUND_NO: int : 0x1
SCREENROUND_YES ¶
SCREENROUND_YES: int : 0x2
SCREENSIZE_LARGE ¶
SCREENSIZE_LARGE: int : 0x03
* Screen size: value indicating the screen is at least approximately 480x640 dp units, corresponding to the large resource qualifier.
SCREENSIZE_NORMAL ¶
SCREENSIZE_NORMAL: int : 0x02
* Screen size: value indicating the screen is at least approximately 320x470 dp units, corresponding to the normal resource qualifier.
SCREENSIZE_SMALL ¶
SCREENSIZE_SMALL: int : 0x01
* Screen size: value indicating the screen is at least approximately 320x426 dp units, corresponding to the small resource qualifier.
SCREENSIZE_XLARGE ¶
SCREENSIZE_XLARGE: int : 0x04
* Screen size: value indicating the screen is at least approximately 720x960 dp units, corresponding to the xlarge resource qualifier.
SCREEN_ROUND ¶
SCREEN_ROUND: int : 0x8000
SCREEN_SIZE ¶
SCREEN_SIZE: int : 0x0200
* Bit mask for screen size configuration.
SMALLEST_SCREEN_SIZE ¶
SMALLEST_SCREEN_SIZE: int : 0x2000
* Bit mask for smallest screen width configuration.
SMALLEST_SCREEN_WIDTH_DP_ANY ¶
SMALLEST_SCREEN_WIDTH_DP_ANY: int : 0x0000
Smallest screen width DPI: not specified.
TOUCHSCREEN ¶
TOUCHSCREEN: int : 0x0008
* Bit mask for touchscreen configuration.
TOUCHSCREEN_FINGER ¶
TOUCHSCREEN_FINGER: int : 0x0003
* Touchscreen: value corresponding to the finger resource qualifier.
TOUCHSCREEN_NOTOUCH ¶
TOUCHSCREEN_NOTOUCH: int : 0x0001
* Touchscreen: value corresponding to the notouch resource qualifier.
UI_MODE ¶
UI_MODE: int : 0x1000
* Bit mask for ui mode configuration.
UI_MODE_NIGHT_NO ¶
UI_MODE_NIGHT_NO: int : 0x1
* UI night mode: value that corresponds to the notnight resource qualifier.
UI_MODE_NIGHT_YES ¶
UI_MODE_NIGHT_YES: int : 0x2
* UI night mode: value that corresponds to the night resource qualifier.
UI_MODE_TYPE_APPLIANCE ¶
UI_MODE_TYPE_APPLIANCE: int : 0x05
* UI mode: value that corresponds to the appliance resource qualifier.
UI_MODE_TYPE_CAR ¶
UI_MODE_TYPE_CAR: int : 0x03
* UI mode: value that corresponds to the car resource qualifier.
UI_MODE_TYPE_DESK ¶
UI_MODE_TYPE_DESK: int : 0x02
* UI mode: value that corresponds to the desk resource qualifier.
UI_MODE_TYPE_NORMAL ¶
UI_MODE_TYPE_NORMAL: int : 0x01
* UI mode: value that corresponds to no UI mode type qualifier being specified.
UI_MODE_TYPE_TELEVISION ¶
UI_MODE_TYPE_TELEVISION: int : 0x04
* UI mode: value that corresponds to the television resource qualifier.
UI_MODE_TYPE_VR_HEADSET ¶
UI_MODE_TYPE_VR_HEADSET: int : 0x07
* UI mode: value that corresponds to the vr resource qualifier.
UI_MODE_TYPE_WATCH ¶
UI_MODE_TYPE_WATCH: int : 0x06
* UI mode: value that corresponds to the watch resource qualifier.
VERSION ¶
VERSION: int : 0x0400
* Bit mask for platform version configuration.
WIDE_COLOR_GAMUT_NO ¶
WIDE_COLOR_GAMUT_NO: int : 0x1
* Wide color gamut: value that corresponds to the nowidecg resource qualifier.
WIDE_COLOR_GAMUT_YES ¶
WIDE_COLOR_GAMUT_YES: int : 0x2
* Wide color gamut: value that corresponds to the widecg resource qualifier.
Variables
This section is empty.
Procedures
AAssetDir_close ¶
AAssetDir_close :: proc "c" (assetDir: ^AAssetDir) ---
*
* Close an opened AAssetDir, freeing any related resources.
AAssetDir_getNextFileName ¶
*
* Iterate over the files in an asset directory. A NULL string is returned * when all the file names have been returned. * * The returned file name is suitable for passing to AAssetManager_open(). * * The string returned here is owned by the AssetDir implementation and is not * guaranteed to remain valid if any other calls are made on this AAssetDir * instance.
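A short Odin sketch of this iteration loop, assuming AAssetDir_getNextFileName is bound per the NDK as proc "c" (^AAssetDir) -> cstring and that mgr is an existing ^AAssetManager:
dir := AAssetManager_openDir(mgr, "")   // "" opens the top-level asset directory
for name := AAssetDir_getNextFileName(dir); name != nil; name = AAssetDir_getNextFileName(dir) {
	// `name` is owned by the AAssetDir and may be invalidated by further calls
	// on this instance; copy it if it must outlive the loop iteration.
}
AAssetDir_close(dir)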
AAssetDir_rewind ¶
AAssetDir_rewind :: proc "c" (assetDir: ^AAssetDir) ---
*
* Reset the iteration state of AAssetDir_getNextFileName() to the beginning.
AAssetManager_fromJava ¶
AAssetManager_fromJava :: proc "c" (env: ^^JNINativeInterface, assetManager: jobject) -> ^AAssetManager ---
*
* Given a Dalvik AssetManager object, obtain the corresponding native AAssetManager * object. Note that the caller is responsible for obtaining and holding a VM reference * to the jobject to prevent its being garbage collected while the native object is * in use.
AAssetManager_open ¶
AAssetManager_open :: proc "c" (mgr: ^AAssetManager, filename: cstring, mode: AssetOpenMode) -> ^AAsset ---
*
* Open an asset. * * The object returned here should be freed by calling AAsset_close().
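A sketch of the usual open/read/close round trip. AAsset_getLength64 and AAsset_read are assumed bound per the NDK; the asset name and the .BUFFER open mode are illustrative:
asset := AAssetManager_open(mgr, "config.json", .BUFFER)
if asset != nil {
	length := AAsset_getLength64(asset)
	buf := make([]u8, int(length))
	defer delete(buf)
	AAsset_read(asset, raw_data(buf), uint(len(buf)))   // assumed (rawptr, uint) parameters
	AAsset_close(asset)
}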
AAssetManager_openDir ¶
AAssetManager_openDir :: proc "c" (mgr: ^AAssetManager, dirName: cstring) -> ^AAssetDir ---
*
* Open the named directory within the asset hierarchy. The directory can then * be inspected with the AAssetDir functions. To open the top-level directory, * pass in "" as the dirName. * * The object returned here should be freed by calling AAssetDir_close().
AAsset_close ¶
AAsset_close :: proc "c" (asset: ^AAsset) ---
*
* Close the asset, freeing all associated resources.
AAsset_getBuffer ¶
*
* Get a pointer to a buffer holding the entire contents of the asset. * * Returns NULL on failure.
AAsset_getLength ¶
*
* Report the total size of the asset data.
AAsset_getLength64 ¶
*
* Report the total size of the asset data. Reports the size using a 64-bit * number instead of the 32-bit number used by AAsset_getLength.
AAsset_getRemainingLength ¶
*
* Report the total amount of asset data that can be read from the current position.
AAsset_getRemainingLength64 ¶
*
* Report the total amount of asset data that can be read from the current position. * * Uses a 64-bit number instead of a 32-bit number as AAsset_getRemainingLength does.
AAsset_isAllocated ¶
*
* Returns whether this asset's internal buffer is allocated in ordinary RAM (i.e. not * mmapped).
AAsset_openFileDescriptor ¶
*
* Open a new file descriptor that can be used to read the asset data. If the * start or length cannot be represented by a 32-bit number, it will be * truncated. If the file is large, use AAsset_openFileDescriptor64 instead. * * Returns < 0 if direct fd access is not possible (for example, if the asset is * compressed).
AAsset_openFileDescriptor64 ¶
AAsset_openFileDescriptor64 :: proc "c" (asset: ^AAsset, outStart: ^i64, outLength: ^i64) -> i32 ---
*
* Open a new file descriptor that can be used to read the asset data. * * Uses 64-bit numbers for the offset and length, instead of the 32-bit numbers * used by AAsset_openFileDescriptor. * * Returns < 0 if direct fd access is not possible (for example, if the asset is * compressed).
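For example (a sketch; `asset` is an already-opened ^AAsset):
start, length: i64
fd := AAsset_openFileDescriptor64(asset, &start, &length)
if fd < 0 {
	// No direct fd access (e.g. the asset is stored compressed); fall back to AAsset_read.
} else {
	// Read `length` bytes beginning at offset `start`, and close the fd when done;
	// the descriptor is owned by the caller.
}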
AAsset_read ¶
*
* Attempt to read 'count' bytes of data from the current offset. * * Returns the number of bytes read, zero on EOF, or < 0 on error.
AAsset_seek ¶
AAsset_seek :: proc "c" (asset: ^AAsset, offset: i64, whence: Seek_Whence) -> i64 ---
*
* Seek to the specified offset within the asset data. 'whence' uses the * same constants as lseek()/fseek(). * * Returns the new position on success, or (off_t) -1 on error.
TODO: should we just not bind these functions since their 64-bit counterparts are available? Because according to https://android.googlesource.com/platform/bionic/+/main/docs/32-bit-abi.md and the __RENAME_IF_FILE_OFFSET64() macro these functions are replaced with the 64-bit versions at runtime anyway (at least for 32-bit Android). off_t is replaced with off64_t too. I think all Android API versions <21 should explicitly be unsupported, both by Odin and the bindings here, since this opens a can of worms that I do not like.
AAsset_seek64 ¶
AAsset_seek64 :: proc "c" (asset: ^AAsset, offset: i64, whence: Seek_Whence) -> i64 ---
*
* Seek to the specified offset within the asset data. 'whence' uses the * same constants as lseek()/fseek(). * * Uses 64-bit data type for large files as opposed to the 32-bit type used * by AAsset_seek. * * Returns the new position on success, or (off64_t) -1 on error.
AChoreographerFrameCallbackData_getFrameTimeNanos ¶
AChoreographerFrameCallbackData_getFrameTimeNanos :: proc "c" (data: ^AChoreographerFrameCallbackData) -> i64 ---
*
* The time in nanoseconds at which the frame started being rendered. * * Note that this time should not be used to advance animation clocks. * Instead, see AChoreographerFrameCallbackData_getFrameTimelineExpectedPresentationTimeNanos().
AChoreographerFrameCallbackData_getFrameTimelineDeadlineNanos ¶
AChoreographerFrameCallbackData_getFrameTimelineDeadlineNanos :: proc "c" (data: ^AChoreographerFrameCallbackData, index: uint) -> i64 ---
*
* Gets the time in nanoseconds at which the frame described at the given \c index needs to be * ready by in order to be presented on time. * * \param index index of a frame timeline, in the range [0, FrameTimelinesLength). See * AChoreographerFrameCallbackData_getFrameTimelinesLength()
AChoreographerFrameCallbackData_getFrameTimelineExpectedPresentationTimeNanos ¶
AChoreographerFrameCallbackData_getFrameTimelineExpectedPresentationTimeNanos :: proc "c" (data: ^AChoreographerFrameCallbackData, index: uint) -> i64 ---
*
* Gets the time in nanoseconds at which the frame described at the given \c index is expected to * be presented. This time should be used to advance any animation clocks. * * \param index index of a frame timeline, in the range [0, FrameTimelinesLength). See * AChoreographerFrameCallbackData_getFrameTimelinesLength()
AChoreographerFrameCallbackData_getFrameTimelineVsyncId ¶
AChoreographerFrameCallbackData_getFrameTimelineVsyncId :: proc "c" (data: ^AChoreographerFrameCallbackData, index: uint) -> i64 ---
*
* Gets the token used by the platform to identify the frame timeline at the given \c index. * * \param index index of a frame timeline, in the range [0, FrameTimelinesLength). See * AChoreographerFrameCallbackData_getFrameTimelinesLength()
AChoreographerFrameCallbackData_getFrameTimelinesLength ¶
AChoreographerFrameCallbackData_getFrameTimelinesLength :: proc "c" (data: ^AChoreographerFrameCallbackData) -> uint ---
*
* The number of possible frame timelines.
AChoreographerFrameCallbackData_getPreferredFrameTimelineIndex ¶
AChoreographerFrameCallbackData_getPreferredFrameTimelineIndex :: proc "c" (data: ^AChoreographerFrameCallbackData) -> uint ---
*
* Gets the index of the platform-preferred frame timeline. * The preferred frame timeline is the default timeline * by which the platform scheduled the app, based on the device configuration.
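Taken together, these accessors are typically used along the following lines from inside a vsync callback (a sketch; `data` is the ^AChoreographerFrameCallbackData passed to the callback):
idx          := AChoreographerFrameCallbackData_getPreferredFrameTimelineIndex(data)
vsync_id     := AChoreographerFrameCallbackData_getFrameTimelineVsyncId(data, idx)
present_time := AChoreographerFrameCallbackData_getFrameTimelineExpectedPresentationTimeNanos(data, idx)
deadline     := AChoreographerFrameCallbackData_getFrameTimelineDeadlineNanos(data, idx)
// Advance animation clocks with `present_time` (not the frame start time) and
// aim to have the frame ready before `deadline`; `vsync_id` identifies the
// chosen timeline to the platform.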
AChoreographer_getInstance ¶
AChoreographer_getInstance :: proc "c" () -> ^AChoreographer ---
*
* Get the AChoreographer instance for the current thread. This must be called * on an ALooper thread. * * Available since API level 24.
AChoreographer_postFrameCallback ¶
AChoreographer_postFrameCallback :: proc "c" (choreographer: ^AChoreographer, callback: AChoreographer_frameCallback, data: rawptr) ---
*
* Deprecated: Use AChoreographer_postFrameCallback64 instead. * Deprecated since API level 29
AChoreographer_postFrameCallback64 ¶
AChoreographer_postFrameCallback64 :: proc "c" (choreographer: ^AChoreographer, callback: AChoreographer_frameCallback64, data: rawptr) ---
*
* Post a callback to be run on the next frame. The data pointer provided will * be passed to the callback function when it's called. * * Available since API level 29.
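A sketch of posting a frame callback, assuming AChoreographer_frameCallback64 matches the NDK shape (frameTimeNanos: i64, data: rawptr):
frame_cb :: proc "c" (frameTimeNanos: i64, data: rawptr) {
	// Render here. Post again from inside the callback to receive the next frame.
}

choreographer := AChoreographer_getInstance()   // must be called on an ALooper thread
AChoreographer_postFrameCallback64(choreographer, frame_cb, nil)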
AChoreographer_postFrameCallbackDelayed ¶
AChoreographer_postFrameCallbackDelayed :: proc "c" (choreographer: ^AChoreographer, callback: AChoreographer_frameCallback, data: rawptr, delayMillis: i64) ---
*
* Deprecated: Use AChoreographer_postFrameCallbackDelayed64 instead. * Deprecated since API level 29
AChoreographer_postFrameCallbackDelayed64 ¶
AChoreographer_postFrameCallbackDelayed64 :: proc "c" (choreographer: ^AChoreographer, callback: AChoreographer_frameCallback64, data: rawptr, delayMillis: u32) ---
*
* Post a callback to be run on the frame following the specified delay. The * data pointer provided will be passed to the callback function when it's * called. * * Available since API level 29.
AChoreographer_postVsyncCallback ¶
AChoreographer_postVsyncCallback :: proc "c" (choreographer: ^AChoreographer, callback: AChoreographer_vsyncCallback, data: rawptr) ---
*
* Posts a callback to be run on the next frame. The data pointer provided will * be passed to the callback function when it's called. * * Available since API level 33.
AChoreographer_registerRefreshRateCallback ¶
AChoreographer_registerRefreshRateCallback :: proc "c" (choreographer: ^AChoreographer, callback: AChoreographer_refreshRateCallback, data: rawptr) ---
*
* Registers a callback to be run when the display refresh rate changes. The * data pointer provided will be passed to the callback function when it's * called. The same callback may be registered multiple times, provided that a * different data pointer is provided each time. * * If an application registers a callback for this choreographer instance when * no new callbacks were previously registered, that callback is guaranteed to * be dispatched. However, if the callback and associated data pointer are * unregistered prior to running the callback, then the callback may be silently * dropped. * * This api is thread-safe. Any thread is allowed to register a new refresh * rate callback for the choreographer instance. * * Note that in API level 30, this api is not guaranteed to be atomic with * DisplayManager. That is, calling Display#getRefreshRate very soon after * a refresh rate callback is invoked may return a stale refresh rate. If any * Display properties would be required by this callback, then it is recommended * to listen directly to DisplayManager.DisplayListener#onDisplayChanged events * instead. * * As of API level 31, this api is guaranteed to have a consistent view with DisplayManager; * Display#getRefreshRate is guaranteed to not return a stale refresh rate when invoked from this * callback. * * Available since API level 30.
AChoreographer_unregisterRefreshRateCallback ¶
AChoreographer_unregisterRefreshRateCallback :: proc "c" (choreographer: ^AChoreographer, callback: AChoreographer_refreshRateCallback, data: rawptr) ---
*
* Unregisters a callback to be run when the display refresh rate changes, along * with the data pointer previously provided when registering the callback. The * callback is only unregistered when the data pointer matches one that was * previously registered. * * This api is thread-safe. Any thread is allowed to unregister an existing * refresh rate callback for the choreographer instance. When a refresh rate * callback and associated data pointer are unregistered, then there is a * guarantee that, once the unregistration completes, the callback will not * be run with the data pointer passed. * * Available since API level 30.
AConfiguration_copy ¶
AConfiguration_copy :: proc "c" (dest: ^AConfiguration, src: ^AConfiguration) ---
*
* Copy the contents of 'src' to 'dest'.
AConfiguration_delete ¶
AConfiguration_delete :: proc "c" (config: ^AConfiguration) ---
*
* Free an AConfiguration that was previously created with * AConfiguration_new().
AConfiguration_diff ¶
AConfiguration_diff :: proc "c" (config1: ^AConfiguration, config2: ^AConfiguration) -> i32 ---
*
* Perform a diff between two configurations. Returns a bit mask of * ACONFIGURATION_* constants, each bit set meaning that configuration element * is different between them.
AConfiguration_fromAssetManager ¶
AConfiguration_fromAssetManager :: proc "c" (out: ^AConfiguration, am: ^AAssetManager) ---
*
* Create and return a new AConfiguration based on the current configuration in
* use in the given AAssetManager.
AConfiguration_getCountry ¶
AConfiguration_getCountry :: proc "c" (config: ^AConfiguration, outCountry: [^]u8) ---
*
* Return the current country code set in the configuration. The output will * be filled with an array of two characters. They are not 0-terminated. If * a country is not set, they will be 0.
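For example (a sketch; handle_country is a hypothetical consumer):
country: [2]u8
AConfiguration_getCountry(config, raw_data(country[:]))
if country[0] != 0 {
	handle_country(string(country[:]))   // e.g. "US"; the two bytes are not 0-terminated
}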
AConfiguration_getDensity ¶
AConfiguration_getDensity :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the current ACONFIGURATION_DENSITY_* set in the configuration.
AConfiguration_getGrammaticalGender ¶
AConfiguration_getGrammaticalGender :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the configuration's grammatical gender, or ACONFIGURATION_GRAMMATICAL_GENDER_ANY if * not set. * * Available since API level 34.
AConfiguration_getKeyboard ¶
AConfiguration_getKeyboard :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the current ACONFIGURATION_KEYBOARD_* set in the configuration.
AConfiguration_getKeysHidden ¶
AConfiguration_getKeysHidden :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the current ACONFIGURATION_KEYSHIDDEN_* set in the configuration.
AConfiguration_getLanguage ¶
AConfiguration_getLanguage :: proc "c" (config: ^AConfiguration, outLanguage: [^]u8) ---
*
* Return the current language code set in the configuration. The output will * be filled with an array of two characters. They are not 0-terminated. If * a language is not set, they will be 0.
AConfiguration_getLayoutDirection ¶
AConfiguration_getLayoutDirection :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the configuration's layout direction, or * ACONFIGURATION_LAYOUTDIR_ANY if not set. * * Available since API level 17.
AConfiguration_getMcc ¶
AConfiguration_getMcc :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the current MCC set in the configuration. 0 if not set.
AConfiguration_getMnc ¶
AConfiguration_getMnc :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the current MNC set in the configuration. 0 if not set.
AConfiguration_getNavHidden ¶
AConfiguration_getNavHidden :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the current ACONFIGURATION_NAVHIDDEN_* set in the configuration.
AConfiguration_getNavigation ¶
AConfiguration_getNavigation :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the current ACONFIGURATION_NAVIGATION_* set in the configuration.
AConfiguration_getOrientation ¶
AConfiguration_getOrientation :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the current ACONFIGURATION_ORIENTATION_* set in the configuration.
AConfiguration_getScreenHeightDp ¶
AConfiguration_getScreenHeightDp :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the current configuration screen height in dp units, or * ACONFIGURATION_SCREEN_HEIGHT_DP_ANY if not set.
AConfiguration_getScreenLong ¶
AConfiguration_getScreenLong :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the current ACONFIGURATION_SCREENLONG_* set in the configuration.
AConfiguration_getScreenRound ¶
AConfiguration_getScreenRound :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the current ACONFIGURATION_SCREENROUND_* set in the configuration. * * Available since API level 30.
AConfiguration_getScreenSize ¶
AConfiguration_getScreenSize :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the current ACONFIGURATION_SCREENSIZE_* set in the configuration.
AConfiguration_getScreenWidthDp ¶
AConfiguration_getScreenWidthDp :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the current configuration screen width in dp units, or * ACONFIGURATION_SCREEN_WIDTH_DP_ANY if not set.
AConfiguration_getSdkVersion ¶
AConfiguration_getSdkVersion :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the current SDK (API) version set in the configuration.
AConfiguration_getSmallestScreenWidthDp ¶
AConfiguration_getSmallestScreenWidthDp :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the configuration's smallest screen width in dp units, or * ACONFIGURATION_SMALLEST_SCREEN_WIDTH_DP_ANY if not set.
AConfiguration_getTouchscreen ¶
AConfiguration_getTouchscreen :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the current ACONFIGURATION_TOUCHSCREEN_* set in the configuration.
AConfiguration_getUiModeNight ¶
AConfiguration_getUiModeNight :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the current ACONFIGURATION_UI_MODE_NIGHT_* set in the configuration.
AConfiguration_getUiModeType ¶
AConfiguration_getUiModeType :: proc "c" (config: ^AConfiguration) -> i32 ---
*
* Return the current ACONFIGURATION_UI_MODE_TYPE_* set in the configuration.
AConfiguration_isBetterThan ¶
AConfiguration_isBetterThan :: proc "c" (base: ^AConfiguration, test: ^AConfiguration, requested: ^AConfiguration) -> i32 ---
*
* Determine whether the configuration in 'test' is better than the existing * configuration in 'base'. If 'requested' is non-NULL, this decision is based * on the overall configuration given there. If it is NULL, this decision is * simply based on which configuration is more specific. Returns non-0 if * 'test' is better than 'base'. * * This assumes you have already filtered the configurations with * AConfiguration_match().
AConfiguration_match ¶
AConfiguration_match :: proc "c" (base: ^AConfiguration, requested: ^AConfiguration) -> i32 ---
*
* Determine whether 'base' is a valid configuration for use within the * environment 'requested'. Returns 0 if there are any values in 'base' * that conflict with 'requested'. Returns 1 if it does not conflict.
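A sketch of the filter-then-rank pattern described above, with hypothetical `candidates` and `requested` configurations:
best: ^AConfiguration
for candidate in candidates {
	if AConfiguration_match(candidate, requested) == 0 {
		continue   // conflicts with the requested environment
	}
	if best == nil || AConfiguration_isBetterThan(best, candidate, requested) != 0 {
		best = candidate   // the candidate ('test') beat the current 'base'
	}
}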
AConfiguration_new ¶
AConfiguration_new :: proc "c" () -> ^AConfiguration ---
*
* Create a new AConfiguration, initialized with no values set.
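A sketch of the usual lifecycle around an asset-manager-backed configuration (mgr is an existing ^AAssetManager):
config := AConfiguration_new()
defer AConfiguration_delete(config)
AConfiguration_fromAssetManager(config, mgr)
if AConfiguration_getDensity(config) >= i32(DENSITY_XHIGH) {
	// Prefer higher-resolution assets on xhdpi and denser screens.
}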
AConfiguration_setCountry ¶
AConfiguration_setCountry :: proc "c" (config: ^AConfiguration, country: cstring) ---
*
* Set the current country code in the configuration, from the first two * characters in the string.
AConfiguration_setDensity ¶
AConfiguration_setDensity :: proc "c" (config: ^AConfiguration, density: i32) ---
*
* Set the current density in the configuration.
AConfiguration_setGrammaticalGender ¶
AConfiguration_setGrammaticalGender :: proc "c" (config: ^AConfiguration, value: i32) ---
*
* Set the configuration's grammatical gender to one of the * ACONFIGURATION_GRAMMATICAL_GENDER_* constants. * * Available since API level 34.
AConfiguration_setKeyboard ¶
AConfiguration_setKeyboard :: proc "c" (config: ^AConfiguration, keyboard: i32) ---
*
* Set the current keyboard in the configuration.
AConfiguration_setKeysHidden ¶
AConfiguration_setKeysHidden :: proc "c" (config: ^AConfiguration, keysHidden: i32) ---
*
* Set the current keys hidden in the configuration.
AConfiguration_setLanguage ¶
AConfiguration_setLanguage :: proc "c" (config: ^AConfiguration, language: cstring) ---
*
* Set the current language code in the configuration, from the first two * characters in the string.
AConfiguration_setLayoutDirection ¶
AConfiguration_setLayoutDirection :: proc "c" (config: ^AConfiguration, value: i32) ---
*
* Set the configuration's layout direction. * * Available since API level 17.
AConfiguration_setMcc ¶
AConfiguration_setMcc :: proc "c" (config: ^AConfiguration, mcc: i32) ---
*
* Set the current MCC in the configuration. 0 to clear.
AConfiguration_setMnc ¶
AConfiguration_setMnc :: proc "c" (config: ^AConfiguration, mnc: i32) ---
*
* Set the current MNC in the configuration. 0 to clear.
AConfiguration_setNavHidden ¶
AConfiguration_setNavHidden :: proc "c" (config: ^AConfiguration, navHidden: i32) ---
*
* Set the current nav hidden in the configuration.
AConfiguration_setNavigation ¶
AConfiguration_setNavigation :: proc "c" (config: ^AConfiguration, navigation: i32) ---
*
* Set the current navigation in the configuration.
AConfiguration_setOrientation ¶
AConfiguration_setOrientation :: proc "c" (config: ^AConfiguration, orientation: i32) ---
*
* Set the current orientation in the configuration.
AConfiguration_setScreenHeightDp ¶
AConfiguration_setScreenHeightDp :: proc "c" (config: ^AConfiguration, value: i32) ---
*
* Set the configuration's current screen height in dp units.
AConfiguration_setScreenLong ¶
AConfiguration_setScreenLong :: proc "c" (config: ^AConfiguration, screenLong: i32) ---
*
* Set the current screen long in the configuration.
AConfiguration_setScreenRound ¶
AConfiguration_setScreenRound :: proc "c" (config: ^AConfiguration, screenRound: i32) ---
*
* Set the current screen round in the configuration.
AConfiguration_setScreenSize ¶
AConfiguration_setScreenSize :: proc "c" (config: ^AConfiguration, screenSize: i32) ---
*
* Set the current screen size in the configuration.
AConfiguration_setScreenWidthDp ¶
AConfiguration_setScreenWidthDp :: proc "c" (config: ^AConfiguration, value: i32) ---
*
* Set the configuration's current screen width in dp units.
AConfiguration_setSdkVersion ¶
AConfiguration_setSdkVersion :: proc "c" (config: ^AConfiguration, sdkVersion: i32) ---
*
* Set the current SDK version in the configuration.
AConfiguration_setSmallestScreenWidthDp ¶
AConfiguration_setSmallestScreenWidthDp :: proc "c" (config: ^AConfiguration, value: i32) ---
*
* Set the configuration's smallest screen width in dp units.
AConfiguration_setTouchscreen ¶
AConfiguration_setTouchscreen :: proc "c" (config: ^AConfiguration, touchscreen: i32) ---
*
* Set the current touchscreen in the configuration.
AConfiguration_setUiModeNight ¶
AConfiguration_setUiModeNight :: proc "c" (config: ^AConfiguration, uiModeNight: i32) ---
*
* Set the current UI mode night in the configuration.
AConfiguration_setUiModeType ¶
AConfiguration_setUiModeType :: proc "c" (config: ^AConfiguration, uiModeType: i32) ---
*
* Set the current UI mode type in the configuration.
AFileDescriptor_create ¶
AFileDescriptor_create :: proc "c" (env: ^^JNINativeInterface) -> jobject ---
*
* Returns a new java.io.FileDescriptor. * * The FileDescriptor created represents an invalid Unix file descriptor (represented by * a file descriptor value of -1). * * Callers of this method should be aware that it can fail, returning NULL with a pending Java * exception. * * Available since API level 31. * * \param env a pointer to the JNI Native Interface of the current thread. * \return a java.io.FileDescriptor on success, nullptr if insufficient heap memory is available.
AFileDescriptor_getFd ¶
AFileDescriptor_getFd :: proc "c" (env: ^^JNINativeInterface, fileDescriptor: jobject) -> i32 ---
*
* Returns the Unix file descriptor represented by the given java.io.FileDescriptor. * * A return value of -1 indicates that \a fileDescriptor represents an invalid file descriptor. * * Aborts the program if \a fileDescriptor is not a java.io.FileDescriptor instance. * * Available since API level 31. * * \param env a pointer to the JNI Native Interface of the current thread. * \param fileDescriptor a java.io.FileDescriptor instance. * \return the Unix file descriptor wrapped by \a fileDescriptor.
AFileDescriptor_setFd ¶
AFileDescriptor_setFd :: proc "c" (env: ^^JNINativeInterface, fileDescriptor: jobject, fd: i32) ---
*
* Sets the Unix file descriptor represented by the given java.io.FileDescriptor. * * This function performs no validation of the Unix file descriptor argument, \a fd. Android uses * the value -1 to represent an invalid file descriptor, all other values are considered valid. * The validity of a file descriptor can be checked with FileDescriptor#valid(). * * Aborts the program if \a fileDescriptor is not a java.io.FileDescriptor instance. * * Available since API level 31. * * \param env a pointer to the JNI Native Interface of the current thread. * \param fileDescriptor a java.io.FileDescriptor instance. * \param fd a Unix file descriptor that \a fileDescriptor will subsequently represent.
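A sketch of the round trip through these three procedures (env and fd come from the surrounding JNI code):
jfd := AFileDescriptor_create(env)        // may be nil with a pending Java exception
if jfd != nil {
	AFileDescriptor_setFd(env, jfd, fd)   // no validation is performed on `fd`
	// AFileDescriptor_getFd(env, jfd) now returns `fd`; hand `jfd` to Java code.
}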
AFontMatcher_create ¶
AFontMatcher_create :: proc "c" () -> ^AFontMatcher ---
*
* Creates a new AFontMatcher object. * * Available since API level 29.
AFontMatcher_destroy ¶
AFontMatcher_destroy :: proc "c" (matcher: ^AFontMatcher) ---
*
* Destroy the matcher object. * * Available since API level 29. * * \param matcher a matcher object. Passing NULL is not allowed.
AFontMatcher_match ¶
AFontMatcher_match :: proc "c" (matcher: ^AFontMatcher, familyName: cstring, text: [^]u16, textLength: u32, runLengthOut: ^u32) -> ^AFont ---
*
* Performs the matching from the generic font family for the text and selects one font. * * For more information about generic font families, read [W3C spec](https://www.w3.org/TR/css-fonts-4/#generic-font-families) * * Even if no font can render the given text, this function will return a non-null result for * drawing the Tofu character. * * Available since API level 29. * * \param matcher a matcher object. Passing NULL is not allowed. * \param familyName a null character terminated font family name * \param text a UTF-16 encoded text buffer to be rendered. Do not pass an empty string. * \param textLength a length of the given text buffer. This must not be zero. * \param runLengthOut if not null, the font run length will be filled. * \return a font to be used for the given text and params. You need to release the returned font by * AFont_close when it is no longer needed.
AFontMatcher_setFamilyVariant ¶
AFontMatcher_setFamilyVariant :: proc "c" (matcher: ^AFontMatcher, familyVariant: FamilyVariant) ---
*
* Set family variant to matcher.
*
* If this function is not called, the matcher performs with AFAMILY_VARIANT_DEFAULT.
*
* Available since API level 29.
*
* \param matcher a matcher object. Passing NULL is not allowed.
* \param familyVariant must be one of AFAMILY_VARIANT_DEFAULT,
* AFAMILY_VARIANT_COMPACT or AFAMILY_VARIANT_ELEGANT.
AFontMatcher_setLocales ¶
AFontMatcher_setLocales :: proc "c" (matcher: ^AFontMatcher, languageTags: cstring) ---
*
* Set font locales to matcher. * * If this function is not called, the matcher performs with an empty locale list. * * Available since API level 29. * * \param matcher a matcher object. Passing NULL is not allowed. * \param languageTags a null character terminated, comma separated list of IETF BCP47 compliant language * tags.
AFontMatcher_setStyle ¶
AFontMatcher_setStyle :: proc "c" (matcher: ^AFontMatcher, weight: FontWeight, italic: bool) ---
*
* Set font style to matcher.
*
* If this function is not called, the matcher performs with AFONT_WEIGHT_NORMAL
* with non-italic style.
*
* Available since API level 29.
*
* \param matcher a matcher object. Passing NULL is not allowed.
* \param weight a font weight value. Only values from 0 to 1000 are valid
* \param italic true if italic, otherwise false.
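Putting the matcher procedures together (a sketch: the .BOLD weight name and the text_utf16 buffer are assumptions; see AFontMatcher_match above for the parameter contracts):
matcher := AFontMatcher_create()
defer AFontMatcher_destroy(matcher)
AFontMatcher_setStyle(matcher, .BOLD, false)

run_length: u32
font := AFontMatcher_match(matcher, "sans-serif", raw_data(text_utf16), u32(len(text_utf16)), &run_length)
// `font` covers the first `run_length` UTF-16 code units; match again for the rest.
AFont_close(font)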
AFont_close ¶
AFont_close :: proc "c" (font: ^AFont) ---
*
* Close an AFont. * * Available since API level 29. * * \param font a font returned by ASystemFontIterator_next or AFontMatcher_match. * Does nothing if NULL is passed.
AFont_getAxisCount ¶
*
* Return a count of font variation settings associated with the current font
*
* The font variation settings are provided as multiple tag-values pairs.
*
* For example, a bold italic font may have the following font variation settings:
* 'wght' 700, 'slnt' -12
* In this case, AFont_getAxisCount returns 2, and AFont_getAxisTag
* and AFont_getAxisValue return the following values.
* \code{.cpp}
* AFont* font = ASystemFontIterator_next(ite);
*
* // Returns the number of axes
* AFont_getAxisCount(font);  // Returns 2
*
* // Returns the tag-value pair for the first axis.
* AFont_getAxisTag(font, 0);  // Returns 'wght'(0x77676874)
* AFont_getAxisValue(font, 0);  // Returns 700.0
*
* // Returns the tag-value pair for the second axis.
* AFont_getAxisTag(font, 1);  // Returns 'slnt'(0x736c6e74)
* AFont_getAxisValue(font, 1);  // Returns -12.0
* \endcode
*
* For more information about font variation settings, read [Font Variations Table](https://docs.microsoft.com/en-us/typography/opentype/spec/fvar)
*
* Available since API level 29.
*
* \param font a font object. Passing NULL is not allowed.
* \return a number of font variation settings.
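The same walk in Odin (a sketch: the bindings are assumed to mirror the NDK, with the count as uint, the index and tag as u32 and the value as f32; apply_variation is hypothetical):
for i in 0 ..< AFont_getAxisCount(font) {
	tag   := AFont_getAxisTag(font, u32(i))     // e.g. 'wght' (0x77676874)
	value := AFont_getAxisValue(font, u32(i))   // e.g. 700.0
	apply_variation(tag, value)
}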
AFont_getAxisTag ¶
*
* Return an OpenType axis tag associated with the current font.
*
* See AFont_getAxisCount for more details.
*
* Available since API level 29.
*
* \param font a font object. Passing NULL is not allowed.
* \param axisIndex an index to the font variation settings. Passing a value larger than or
* equal to AFont_getAxisCount is not allowed.
* \return an OpenType axis tag value for the given font variation setting.
AFont_getAxisValue ¶
*
* Return an OpenType axis value associated with the current font.
*
* See AFont_getAxisCount for more details.
*
* Available since API level 29.
*
* \param font a font object. Passing NULL is not allowed.
* \param axisIndex an index to the font variation settings. Passing a value larger than or
* equal to AFont_getAxisCount is not allowed.
* \return a float value for the given font variation setting.
AFont_getCollectionIndex ¶
*
* Return a font collection index value associated with the current font. * * In case the target font file is a font collection (e.g. .ttc or .otc), this * returns a non-negative value as a font offset in the collection. This * always returns 0 if the target font file is a regular font. * * Available since API level 29. * * \param font a font object. Passing NULL is not allowed. * \return a font collection index.
AFont_getFontFilePath ¶
*
* Return an absolute path to the current font file. * * The font formats returned by this method are: OpenType, OpenType Font Collection, * TrueType, and TrueType Collection. * The file extension could be one of *.otf, *.ttf, *.otc or *.ttc. * * The font file returned is guaranteed to be opened with O_RDONLY. * Note that the returned pointer is valid until AFont_close() is called for the given font. * * Available since API level 29. * * \param font a font object. Passing NULL is not allowed. * \return a string of the font file path.
AFont_getLocale ¶
*
* Return an IETF BCP47 compliant language tag associated with the current font. * * For information about IETF BCP47, read [Locale.forLanguageTag(java.lang.String)](https://developer.android.com/reference/java/util/Locale.html#forLanguageTag(java.lang.String)) * * Note that the returned pointer is valid until AFont_close() is called. * * Available since API level 29. * * \param font a font object. Passing NULL is not allowed. * \return an IETF BCP47 compliant language tag or nullptr if not available.
AFont_getWeight ¶
AFont_getWeight :: proc "c" (font: ^AFont) -> FontWeight ---
*
* Return a weight value associated with the current font.
*
* The weight values are positive and less than or equal to 1000.
* Here are pairs of the common names and their values.
* Value | Name                      | NDK Definition
* ------|---------------------------|--------------------------
* 100   | Thin                      | AFONT_WEIGHT_THIN
* 200   | Extra Light (Ultra Light) | AFONT_WEIGHT_EXTRA_LIGHT
* 300   | Light                     | AFONT_WEIGHT_LIGHT
* 400   | Normal (Regular)          | AFONT_WEIGHT_NORMAL
* 500   | Medium                    | AFONT_WEIGHT_MEDIUM
* 600   | Semi Bold (Demi Bold)     | AFONT_WEIGHT_SEMI_BOLD
* 700   | Bold                      | AFONT_WEIGHT_BOLD
* 800   | Extra Bold (Ultra Bold)   | AFONT_WEIGHT_EXTRA_BOLD
* 900   | Black (Heavy)             | AFONT_WEIGHT_BLACK
*
* Note that the weight value may fall in between the above values, e.g. 250 weight.
*
* For more information about font weight, read [OpenType usWeightClass](https://docs.microsoft.com/en-us/typography/opentype/spec/os2#usweightclass)
*
* Available since API level 29.
*
* \param font a font object. Passing NULL is not allowed.
* \return a positive integer less than or equal to AFONT_WEIGHT_MAX.
AFont_isItalic ¶
*
* Return true if the current font is italic, otherwise false. * * Available since API level 29. * * \param font a font object. Passing NULL is not allowed. * \return true if italic, otherwise false.
AHardwareBuffer_acquire ¶
AHardwareBuffer_acquire :: proc "c" (buffer: ^AHardwareBuffer) ---
*
* Acquire a reference on the given AHardwareBuffer object. * * This prevents the object from being deleted until the last reference * is removed. * * Available since API level 26.
AHardwareBuffer_allocate ¶
AHardwareBuffer_allocate :: proc "c" (desc: ^AHardwareBuffer_Desc, outBuffer: ^^AHardwareBuffer) -> i32 ---
*
* Allocates a buffer that matches the passed AHardwareBuffer_Desc. * * If allocation succeeds, the buffer can be used according to the * usage flags specified in its description. If a buffer is used in ways * not compatible with its usage flags, the results are undefined and * may include program termination. * * Available since API level 26. * * \return 0 on success, or an error number if the allocation fails for * any reason. The returned buffer has a reference count of 1.
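A sketch of a typical allocation. The AHardwareBuffer_Desc field names and the format/usage flag names follow the NDK and may differ slightly in this binding:
desc := AHardwareBuffer_Desc{
	width  = 1920,
	height = 1080,
	layers = 1,
	format = .R8G8B8A8_UNORM,
	usage  = {.GPU_SAMPLED_IMAGE, .CPU_WRITE_OFTEN},
}
buffer: ^AHardwareBuffer
if AHardwareBuffer_allocate(&desc, &buffer) == 0 {
	// Use the buffer, then drop the initial reference:
	AHardwareBuffer_release(buffer)
}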
AHardwareBuffer_describe ¶
AHardwareBuffer_describe :: proc "c" (buffer: ^AHardwareBuffer, outDesc: ^AHardwareBuffer_Desc) ---
*
* Return a description of the AHardwareBuffer in the passed * AHardwareBuffer_Desc struct. * * Available since API level 26.
AHardwareBuffer_fromHardwareBuffer ¶
AHardwareBuffer_fromHardwareBuffer :: proc "c" (env: ^^JNINativeInterface, hardwareBufferObj: jobject) -> ^AHardwareBuffer ---
*
* Return the AHardwareBuffer wrapped by a Java HardwareBuffer object. * * This method does not acquire any additional reference to the AHardwareBuffer * that is returned. To keep the AHardwareBuffer alive after the Java * HardwareBuffer object is closed, explicitly or by the garbage collector, be * sure to use AHardwareBuffer_acquire() to acquire an additional reference. * * Available since API level 26.
AHardwareBuffer_getId ¶
AHardwareBuffer_getId :: proc "c" (buffer: ^AHardwareBuffer, outId: ^u64) -> i32 ---
*
* Get the system wide unique id for an AHardwareBuffer. * * Available since API level 31. * * \return 0 on success, -EINVAL if \a buffer or \a outId is NULL, or an error number if the * operation fails for any reason.
AHardwareBuffer_isSupported ¶
AHardwareBuffer_isSupported :: proc "c" (desc: ^AHardwareBuffer_Desc) -> i32 ---
*
* Test whether the given format and usage flag combination is * allocatable. * * If this function returns true, it means that a buffer with the given * description can be allocated on this implementation, unless resource * exhaustion occurs. If this function returns false, it means that the * allocation of the given description will never succeed. * * The return value of this function may depend on all fields in the * description, except stride, which is always ignored. For example, * some implementations have implementation-defined limits on texture * size and layer count. * * Available since API level 29. * * \return 1 if the format and usage flag combination is allocatable, * 0 otherwise.
AHardwareBuffer_lock ¶
AHardwareBuffer_lock :: proc "c" (buffer: ^AHardwareBuffer, usage: AHardwareBuffer_UsageFlags, fence: i32, rect: ^ARect, outVirtualAddress: ^rawptr) -> i32 ---
*
* Lock the AHardwareBuffer for direct CPU access. * * This function can lock the buffer for either reading or writing. * It may block if the hardware needs to finish rendering, if CPU caches * need to be synchronized, or possibly for other implementation- * specific reasons. * * The passed AHardwareBuffer must have one layer, otherwise the call * will fail. * * If \a fence is not negative, it specifies a fence file descriptor on * which to wait before locking the buffer. If it's negative, the caller * is responsible for ensuring that writes to the buffer have completed * before calling this function. Using this parameter is more efficient * than waiting on the fence and then calling this function. * * The \a usage parameter may only specify AHARDWAREBUFFER_USAGE_CPU_*. * If set, then outVirtualAddress is filled with the address of the * buffer in virtual memory. The flags must also be compatible with * usage flags specified at buffer creation: if a read flag is passed, * the buffer must have been created with * AHARDWAREBUFFER_USAGE_CPU_READ_RARELY or * AHARDWAREBUFFER_USAGE_CPU_READ_OFTEN. If a write flag is passed, it * must have been created with AHARDWAREBUFFER_USAGE_CPU_WRITE_RARELY or * AHARDWAREBUFFER_USAGE_CPU_WRITE_OFTEN. * * If \a rect is not NULL, the caller promises to modify only data in * the area specified by rect. If rect is NULL, the caller may modify * the contents of the entire buffer. The content of the buffer outside * of the specified rect is NOT modified by this call. * * It is legal for several different threads to lock a buffer for read * access; none of the threads are blocked. * * Locking a buffer simultaneously for write or read/write is undefined, * but will neither terminate the process nor block the caller. * AHardwareBuffer_lock may return an error or leave the buffer's * content in an indeterminate state. * * If the buffer has AHARDWAREBUFFER_FORMAT_BLOB, it is legal to lock it * for reading and writing in multiple threads and/or processes * simultaneously, and the contents of the buffer behave like shared * memory. * * Available since API level 26. * * \return 0 on success. -EINVAL if \a buffer is NULL, the usage flags * are not a combination of AHARDWAREBUFFER_USAGE_CPU_*, or the buffer * has more than one layer. Error number if the lock fails for any other * reason.
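A sketch of a plain CPU write (the .CPU_WRITE_OFTEN flag name is an assumption; -1 means no fence to wait on, and a nil rect locks the whole buffer):
pixels: rawptr
if AHardwareBuffer_lock(buffer, {.CPU_WRITE_OFTEN}, -1, nil, &pixels) == 0 {
	// Write pixel data through `pixels` here.
	AHardwareBuffer_unlock(buffer, nil)   // nil fence: blocks until the unlock completes
}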
AHardwareBuffer_lockAndGetInfo ¶
AHardwareBuffer_lockAndGetInfo :: proc "c" ( buffer: ^AHardwareBuffer, usage: AHardwareBuffer_UsageFlags, fence: i32, rect: ^ARect, outVirtualAddress: ^rawptr, outBytesPerPixel: ^i32, outBytesPerStride: ^i32, ) -> i32 ---
*
* Lock an AHardwareBuffer for direct CPU access. * * This function is the same as the above lock function, but passes back * additional information about the bytes per pixel and the bytes per stride * of the locked buffer. If the bytes per pixel or bytes per stride are unknown * or variable, or if the underlying mapper implementation does not support returning * additional information, then this call will fail with INVALID_OPERATION. * * Available since API level 29.
AHardwareBuffer_lockPlanes ¶
AHardwareBuffer_lockPlanes :: proc "c" (buffer: ^AHardwareBuffer, usage: AHardwareBuffer_UsageFlags, fence: i32, rect: ^ARect, outPlanes: ^AHardwareBuffer_Planes) -> i32 ---
*
* Lock a potentially multi-planar AHardwareBuffer for direct CPU access.
*
* This function is similar to AHardwareBuffer_lock, but can lock
* multi-planar formats. The locked planes are returned in the \a
* outPlanes argument. Note that multi-planar should not be confused
* with multi-layer images, which this locking function does not
* support.
*
* YUV formats are always represented by three separate planes of data,
* one for each color plane. The order of planes in the array is
* guaranteed such that plane #0 is always Y, plane #1 is always U (Cb),
* and plane #2 is always V (Cr). All other formats are represented by
* a single plane.
*
* Additional information always accompanies the buffers, describing the
* row stride and the pixel stride for each plane.
*
* In case the buffer cannot be locked, \a outPlanes will contain zero
* planes.
*
* See the AHardwareBuffer_lock documentation for all other locking
* semantics.
*
* Available since API level 29.
*
* \return 0 on success. -EINVAL if \a buffer is NULL, the usage flags
* are not a combination of AHARDWAREBUFFER_USAGE_CPU_*, or the buffer
* has more than one layer. Error number if the lock fails for any other
* reason.
AHardwareBuffer_recvHandleFromUnixSocket ¶
AHardwareBuffer_recvHandleFromUnixSocket :: proc "c" (socketFd: i32, outBuffer: ^^AHardwareBuffer) -> i32 ---
*
* Receive an AHardwareBuffer from an AF_UNIX socket.
*
* Available since API level 26.
*
* \return 0 on success, -EINVAL if \a outBuffer is NULL, or an error
* number if the operation fails for any reason.
AHardwareBuffer_release ¶
AHardwareBuffer_release :: proc "c" (buffer: ^AHardwareBuffer) ---
*
* Remove a reference that was previously acquired with
* AHardwareBuffer_acquire() or AHardwareBuffer_allocate().
*
* Available since API level 26.
AHardwareBuffer_sendHandleToUnixSocket ¶
AHardwareBuffer_sendHandleToUnixSocket :: proc "c" (buffer: ^AHardwareBuffer, socketFd: i32) -> i32 ---
*
* Send the AHardwareBuffer to an AF_UNIX socket.
*
* Available since API level 26.
*
* \return 0 on success, -EINVAL if \a buffer is NULL, or an error
* number if the operation fails for any reason.
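For illustration, a minimal Odin sketch of passing a buffer between processes over an already connected AF_UNIX socket; `sock_fd` is an assumed, pre-established socket descriptor, and error handling is reduced to nil/boolean results:

import android "core:sys/android"

send_buffer :: proc(buffer: ^android.AHardwareBuffer, sock_fd: i32) -> bool {
	// The receiving process obtains its own reference via recvHandleFromUnixSocket.
	return android.AHardwareBuffer_sendHandleToUnixSocket(buffer, sock_fd) == 0
}

receive_buffer :: proc(sock_fd: i32) -> ^android.AHardwareBuffer {
	buffer: ^android.AHardwareBuffer
	if android.AHardwareBuffer_recvHandleFromUnixSocket(sock_fd, &buffer) != 0 {
		return nil
	}
	return buffer // caller must eventually call AHardwareBuffer_release
}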
AHardwareBuffer_toHardwareBuffer ¶
AHardwareBuffer_toHardwareBuffer :: proc "c" (env: ^^JNINativeInterface, hardwareBuffer: ^AHardwareBuffer) -> jobject ---
*
* Return a new Java HardwareBuffer object that wraps the passed native
* AHardwareBuffer object. The Java HardwareBuffer will acquire a
* reference to the internal buffer and manage its lifetime. For example:
*
* <pre><code>
* AHardwareBuffer* buffer;
* AHardwareBuffer_allocate(..., &buffer);  // `buffer` has reference count 1
* jobject java_result = AHardwareBuffer_toHardwareBuffer(env, buffer);  // `buffer` has reference count 2.
* AHardwareBuffer_release(buffer);  // `buffer` has reference count 1
* return java_result;  // The underlying buffer is kept alive by `java_result` and
*                      // will drop to reference count 0 when it is closed on the
*                      // Java side with HardwareBuffer::close().
* </code></pre>
*
* Available since API level 26.
AHardwareBuffer_unlock ¶
AHardwareBuffer_unlock :: proc "c" (buffer: ^AHardwareBuffer, fence: ^i32) -> i32 ---
*
* Unlock the AHardwareBuffer from direct CPU access.
*
* Must be called after all changes to the buffer are completed by the
* caller. If \a fence is NULL, the function will block until all work
* is completed. Otherwise, \a fence will be set either to a valid file
* descriptor or to -1. The file descriptor will become signaled once
* the unlocking is complete and buffer contents are updated.
* The caller is responsible for closing the file descriptor once it's
* no longer needed. The value -1 indicates that unlocking has already
* completed before the function returned and no further operations are
* necessary.
*
* Available since API level 26.
*
* \return 0 on success. -EINVAL if \a buffer is NULL. Error number if
* the unlock fails for any reason.
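For illustration, a minimal sketch of a lock/write/unlock cycle against these bindings; `buffer` and `usage` are assumed to come from an earlier AHardwareBuffer_allocate call with CPU usage flags set:

import android "core:sys/android"

write_pixels :: proc(buffer: ^android.AHardwareBuffer, usage: android.AHardwareBuffer_UsageFlags) -> bool {
	addr: rawptr
	// fence = -1: the caller guarantees prior writes have completed;
	// rect = nil: the whole buffer may be modified.
	if android.AHardwareBuffer_lock(buffer, usage, -1, nil, &addr) != 0 {
		return false
	}
	// ... write pixel data through addr ...

	fence: i32
	if android.AHardwareBuffer_unlock(buffer, &fence) != 0 {
		return false
	}
	if fence != -1 {
		// A fence fd was returned: wait on it (or pass it on), then close it.
	}
	return true
}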
AImageDecoderFrameInfo_create ¶
AImageDecoderFrameInfo_create :: proc "c" () -> ^AImageDecoderFrameInfo ---
*
* Create an uninitialized AImageDecoderFrameInfo.
*
* Introduced in API 31.
*
* This can be passed to {@link AImageDecoder_getFrameInfo} to fill
* in information about the current frame. It may be reused.
*
* Must be deleted with {@link AImageDecoderFrameInfo_delete}.
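For illustration, a minimal sketch of the create/use/delete lifecycle; `decoder` is assumed to exist, and the `.SUCCESS` member name is an assumption about this binding's AImageDecoderResult:

import android "core:sys/android"

current_frame_duration :: proc(decoder: ^android.AImageDecoder) -> i64 {
	info := android.AImageDecoderFrameInfo_create()
	defer android.AImageDecoderFrameInfo_delete(info)

	if android.AImageDecoder_getFrameInfo(decoder, info) != .SUCCESS {
		return -1
	}
	return android.AImageDecoderFrameInfo_getDuration(info) // nanoseconds
}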
AImageDecoderFrameInfo_delete ¶
AImageDecoderFrameInfo_delete :: proc "c" (info: ^AImageDecoderFrameInfo) ---
*
* Delete an AImageDecoderFrameInfo.
*
* Introduced in API 31.
AImageDecoderFrameInfo_getBlendOp ¶
AImageDecoderFrameInfo_getBlendOp :: proc "c" (info: ^AImageDecoderFrameInfo) -> i32 ---
*
* Return how this frame is blended with the previous frame.
*
* Introduced in API 31.
*
* This, along with other information in AImageDecoderFrameInfo,
* can be useful for determining whether a frame is independent, but
* the decoder handles blending frames, so a simple
* sequential client does not need this.
*
* @return one of:
* - {@link ANDROID_IMAGE_DECODER_BLEND_OP_SRC}
* - {@link ANDROID_IMAGE_DECODER_BLEND_OP_SRC_OVER}
* Errors:
* - {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER} if |info| is null.
AImageDecoderFrameInfo_getDisposeOp ¶
AImageDecoderFrameInfo_getDisposeOp :: proc "c" (info: ^AImageDecoderFrameInfo) -> i32 ---
*
* Return how this frame is "disposed" before showing the next one.
*
* Introduced in API 31.
*
* This, along with other information in AImageDecoderFrameInfo,
* can be useful for determining whether a frame is independent, but
* the decoder handles disposing of frames, so a simple
* sequential client does not need this.
*
* @return one of:
* - {@link ANDROID_IMAGE_DECODER_DISPOSE_OP_NONE}
* - {@link ANDROID_IMAGE_DECODER_DISPOSE_OP_BACKGROUND}
* - {@link ANDROID_IMAGE_DECODER_DISPOSE_OP_PREVIOUS}
* Errors:
* - {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER} if |info| is null.
AImageDecoderFrameInfo_getDuration ¶
AImageDecoderFrameInfo_getDuration :: proc "c" (info: ^AImageDecoderFrameInfo) -> i64 ---
*
* Report the number of nanoseconds to show the current frame.
*
* Introduced in API 31.
*
* Errors:
* - returns {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER} if |info| is null.
AImageDecoderFrameInfo_getFrameRect ¶
AImageDecoderFrameInfo_getFrameRect :: proc "c" (info: ^AImageDecoderFrameInfo) -> ARect ---
*
* The rectangle of the image (within 0, 0,
* {@link AImageDecoderHeaderInfo_getWidth}, {@link AImageDecoderHeaderInfo_getHeight})
* updated by this frame.
*
* Introduced in API 31.
*
* Note that this is unaffected by calls to
* {@link AImageDecoder_setTargetSize} or
* {@link AImageDecoder_setCrop}.
*
* A frame may update only part of the image. This will always be
* contained by the image’s dimensions.
*
* This, along with other information in AImageDecoderFrameInfo,
* can be useful for determining whether a frame is independent, but
* the decoder handles blending frames, so a simple
* sequential client does not need this.
*
* Errors:
* - returns an empty ARect if |info| is null.
AImageDecoderFrameInfo_hasAlphaWithinBounds ¶
AImageDecoderFrameInfo_hasAlphaWithinBounds :: proc "c" (info: ^AImageDecoderFrameInfo) -> bool ---
*
* Whether the new portion of this frame may contain alpha.
*
* Introduced in API 31.
*
* Unless this frame is independent (see {@link AImageDecoder_decodeImage}),
* a single call to {@link AImageDecoder_decodeImage} will decode an updated
* rectangle of pixels and then blend it with the existing pixels in the
* |pixels| buffer according to {@link AImageDecoderFrameInfo_getBlendOp}. This
* method returns whether the updated rectangle has alpha, prior to blending.
* The return value is conservative; for example, if a color-index-based frame
* has a color with alpha but does not use it, this will still return true.
*
* This, along with other information in AImageDecoderFrameInfo,
* can be useful for determining whether a frame is independent, but
* the decoder handles blending frames, so a simple
* sequential client does not need this.
*
* Note that this may differ from whether the composed frame (that is, the
* resulting image after blending) has alpha. If this frame does not fill the
* entire image dimensions (see {@link AImageDecoderFrameInfo_getFrameRect})
* or it blends with an opaque frame, for example, the composed frame’s alpha
* may not match.
*
* Errors:
* - returns false if |info| is null.
AImageDecoderHeaderInfo_getAlphaFlags ¶
AImageDecoderHeaderInfo_getAlphaFlags :: proc "c" (header_info: ^AImageDecoderHeaderInfo) -> AndroidBitmapFlagsAlpha ---
*
* Report how the {@link AImageDecoder} will handle alpha by default. If the image
* contains no alpha (according to its header), this will return
* {@link ANDROID_BITMAP_FLAGS_ALPHA_OPAQUE}. If the image may contain alpha,
* this returns {@link ANDROID_BITMAP_FLAGS_ALPHA_PREMUL}, because
* {@link AImageDecoder_decodeImage} will premultiply pixels by default.
*
* Available since API level 30.
*
* Starting in API level 31, an AImageDecoder may contain multiple frames of an
* animation, but this method still only reports whether the first frame has
* alpha.
AImageDecoderHeaderInfo_getAndroidBitmapFormat ¶
AImageDecoderHeaderInfo_getAndroidBitmapFormat :: proc "c" (header_info: ^AImageDecoderHeaderInfo) -> AndroidBitmapFormat ---
*
* Report the {@link AndroidBitmapFormat} the AImageDecoder will decode to
* by default. {@link AImageDecoder} will try to choose one that is sensible
* for the image and the system. Note that this does not indicate the
* encoded format of the image.
*
* Available since API level 30.
AImageDecoderHeaderInfo_getDataSpace ¶
AImageDecoderHeaderInfo_getDataSpace :: proc "c" (header_info: ^AImageDecoderHeaderInfo) -> ADataSpace ---
*
* Report the dataspace the AImageDecoder will decode to by default.
*
* By default, {@link AImageDecoder_decodeImage} will not do any color
* conversion.
*
* Available since API level 30.
*
* @return The {@link ADataSpace} representing the way the colors
* are encoded (or {@link ADATASPACE_UNKNOWN} if there is not a
* corresponding ADataSpace). This specifies how to interpret the colors
* in the decoded image, unless {@link AImageDecoder_setDataSpace} is
* called to decode to a different ADataSpace.
*
* Note that ADataSpace only exposes a few values. This may return
* {@link ADATASPACE_UNKNOWN}, even for Named ColorSpaces, if they have
* no corresponding {@link ADataSpace}.
AImageDecoderHeaderInfo_getHeight ¶
AImageDecoderHeaderInfo_getHeight :: proc "c" (header_info: ^AImageDecoderHeaderInfo) -> i32 ---
*
* Report the native height of the encoded image. This is also the logical
* pixel height of the output, unless {@link AImageDecoder_setTargetSize} is
* used to choose a different size or {@link AImageDecoder_setCrop} is used to
* set a crop rect.
*
* Available since API level 30.
AImageDecoderHeaderInfo_getMimeType ¶
AImageDecoderHeaderInfo_getMimeType :: proc "c" (header_info: ^AImageDecoderHeaderInfo) -> cstring ---
*
* Report the mimeType of the encoded image.
*
* Available since API level 30.
*
* @return a string literal describing the mime type.
AImageDecoderHeaderInfo_getWidth ¶
AImageDecoderHeaderInfo_getWidth :: proc "c" (header_info: ^AImageDecoderHeaderInfo) -> i32 ---
*
* Report the native width of the encoded image. This is also the logical
* pixel width of the output, unless {@link AImageDecoder_setTargetSize} is
* used to choose a different size or {@link AImageDecoder_setCrop} is used to
* set a crop rect.
*
* Available since API level 30.
AImageDecoder_advanceFrame ¶
AImageDecoder_advanceFrame :: proc "c" (decoder: ^AImageDecoder) -> AImageDecoderResult ---
*
* Advance to the next frame in the animation.
*
* Introduced in API 31.
*
* The AImageDecoder keeps track internally which frame it is ready to decode
* (the "current frame"). Initially it is set to decode the first frame, and
* each call to {@link AImageDecoder_decodeImage} will continue to decode
* the same frame until this method (or {@link AImageDecoder_rewind})
* is called.
*
* Note that this can be used to skip a frame without decoding it. But
* some frames depend on (i.e. blend with) prior frames, and
* AImageDecoder_decodeImage assumes that the prior frame is in the
* |pixels| buffer. In addition, AImageDecoder_decodeImage handles caching and
* restoring frames (see {@link ANDROID_IMAGE_DECODER_DISPOSE_OP_PREVIOUS}), so
* skipping frames in an image with such frames may not produce the correct
* results.
*
* Only supported by {@link ANDROID_BITMAP_FORMAT_RGBA_8888} and
* {@link ANDROID_BITMAP_FORMAT_RGBA_F16}.
*
* @param decoder an {@link AImageDecoder} object.
* @return {@link ANDROID_IMAGE_DECODER_SUCCESS} on success or a value
* indicating the reason for the failure.
*
* Errors:
* - {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER}: The AImageDecoder
* represents an image that is not animated (see
* {@link AImageDecoder_isAnimated}) or the AImageDecoder is null.
* - {@link ANDROID_IMAGE_DECODER_INVALID_STATE}: The requested
* {@link AndroidBitmapFormat} does not support animation.
* - {@link ANDROID_IMAGE_DECODER_INCOMPLETE}: The input appears
* to be truncated. The client must call {@link AImageDecoder_rewind}
* before calling {@link AImageDecoder_decodeImage} again.
* - {@link ANDROID_IMAGE_DECODER_ERROR}: The input contains an error.
* The client must call {@link AImageDecoder_rewind} before
* calling {@link AImageDecoder_decodeImage} again.
* - {@link ANDROID_IMAGE_DECODER_FINISHED}: The input contains no
* more frames. The client must call {@link AImageDecoder_rewind}
* before calling {@link AImageDecoder_decodeImage} again.
AImageDecoder_computeSampledSize ¶
AImageDecoder_computeSampledSize :: proc "c" (decoder: ^AImageDecoder, sampleSize: i32, width: ^i32, height: ^i32) -> AImageDecoderResult ---
*
* Compute the dimensions to use for a given sampleSize.
*
* Although AImageDecoder can scale to an arbitrary target size (see
* {@link AImageDecoder_setTargetSize}), some sizes may be more efficient than
* others. This computes the most efficient target size to use to reach a
* particular sampleSize.
*
* Available since API level 30.
*
* @param decoder an {@link AImageDecoder} object.
* @param sampleSize A subsampling rate of the original image. Must be greater
* than or equal to 1. A sampleSize of 2 means to skip every
* other pixel/line, resulting in a width and height that are
* 1/2 of the original dimensions, with 1/4 the number of
* pixels.
* @param width Out parameter for the width sampled by sampleSize, and rounded
* in the direction that the decoder can do most efficiently.
* @param height Out parameter for the height sampled by sampleSize, and rounded
* in the direction that the decoder can do most efficiently.
* @return {@link ANDROID_IMAGE_DECODER_SUCCESS} on success or a value
* indicating the reason for the failure.
*
* Errors:
* - {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER}: The
* {@link AImageDecoder}, |width| or |height| is null or |sampleSize| is < 1.
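For illustration, a minimal sketch that picks an efficient size near 1/4 of the original dimensions (sampleSize = 4) and applies it; the `.SUCCESS` member name is an assumption about this binding's AImageDecoderResult:

import android "core:sys/android"

downscale_by_four :: proc(decoder: ^android.AImageDecoder) -> bool {
	w, h: i32
	if android.AImageDecoder_computeSampledSize(decoder, 4, &w, &h) != .SUCCESS {
		return false
	}
	// The returned dimensions are rounded in whichever direction the
	// decoder handles most efficiently.
	return android.AImageDecoder_setTargetSize(decoder, w, h) == .SUCCESS
}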
AImageDecoder_createFromAAsset ¶
AImageDecoder_createFromAAsset :: proc "c" (asset: ^AAsset, outDecoder: ^^AImageDecoder) -> AImageDecoderResult ---
*
* Create a new {@link AImageDecoder} from an {@link AAsset}.
*
* Available since API level 30.
*
* @param asset {@link AAsset} containing encoded image data. Client is still
* responsible for calling {@link AAsset_close} on it, which may be
* done after deleting the returned {@link AImageDecoder}.
* @param outDecoder On success (i.e. return value is
* {@link ANDROID_IMAGE_DECODER_SUCCESS}), this will be set to
* a newly created {@link AImageDecoder}. Caller is
* responsible for calling {@link AImageDecoder_delete} on it.
* @return {@link ANDROID_IMAGE_DECODER_SUCCESS} on success or a value
* indicating the reason for the failure.
*
* Errors:
* - {@link ANDROID_IMAGE_DECODER_INCOMPLETE}: The asset was truncated before
* reading the image header.
* - {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER}: One of the parameters is
* null.
* - {@link ANDROID_IMAGE_DECODER_INVALID_INPUT}: There is an error in the
* header.
* - {@link ANDROID_IMAGE_DECODER_SEEK_ERROR}: The asset failed to seek.
* - {@link ANDROID_IMAGE_DECODER_INTERNAL_ERROR}: Some other error, like a
* failure to allocate memory.
* - {@link ANDROID_IMAGE_DECODER_UNSUPPORTED_FORMAT}: The format is not
* supported.
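For illustration, a minimal end-to-end sketch: create a decoder from an AAsset, allocate a tightly packed pixel buffer, and decode one frame. The `.SUCCESS` member name is an assumption about this binding's AImageDecoderResult:

import android "core:sys/android"

decode_asset :: proc(asset: ^android.AAsset) -> []byte {
	decoder: ^android.AImageDecoder
	if android.AImageDecoder_createFromAAsset(asset, &decoder) != .SUCCESS {
		return nil
	}
	defer android.AImageDecoder_delete(decoder)

	header := android.AImageDecoder_getHeaderInfo(decoder)
	height := android.AImageDecoderHeaderInfo_getHeight(header)
	stride := android.AImageDecoder_getMinimumStride(decoder) // no padding

	pixels := make([]byte, int(stride) * int(height))
	if android.AImageDecoder_decodeImage(decoder, raw_data(pixels), stride, uint(len(pixels))) != .SUCCESS {
		delete(pixels)
		return nil
	}
	return pixels // caller owns; format is the decoder's default AndroidBitmapFormat
}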
AImageDecoder_createFromBuffer ¶
AImageDecoder_createFromBuffer :: proc "c" (buffer: rawptr, length: uint, outDecoder: ^^AImageDecoder) -> AImageDecoderResult ---
*
* Create a new AImageDecoder from a buffer.
*
* Available since API level 30.
*
* @param buffer Pointer to encoded data. Must be valid for the entire time
* the {@link AImageDecoder} is used.
* @param length Byte length of buffer.
* @param outDecoder On success (i.e. return value is
* {@link ANDROID_IMAGE_DECODER_SUCCESS}), this will be set to
* a newly created {@link AImageDecoder}. Caller is
* responsible for calling {@link AImageDecoder_delete} on it.
* @return {@link ANDROID_IMAGE_DECODER_SUCCESS} on success or a value
* indicating the reason for the failure.
*
* Errors:
* - {@link ANDROID_IMAGE_DECODER_INCOMPLETE}: The encoded image was truncated before
* reading the image header.
* - {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER}: One of the parameters is
* invalid.
* - {@link ANDROID_IMAGE_DECODER_INVALID_INPUT}: There is an error in the
* header.
* - {@link ANDROID_IMAGE_DECODER_INTERNAL_ERROR}: Some other error, like a
* failure to allocate memory.
* - {@link ANDROID_IMAGE_DECODER_UNSUPPORTED_FORMAT}: The format is not
* supported.
AImageDecoder_createFromFd ¶
AImageDecoder_createFromFd :: proc "c" (fd: i32, outDecoder: ^^AImageDecoder) -> AImageDecoderResult ---
*
* Create a new {@link AImageDecoder} from a file descriptor.
*
* Available since API level 30.
*
* @param fd Seekable, readable, open file descriptor for encoded data.
* Client is still responsible for closing it, which may be done
* after deleting the returned {@link AImageDecoder}.
* @param outDecoder On success (i.e. return value is
* {@link ANDROID_IMAGE_DECODER_SUCCESS}), this will be set to
* a newly created {@link AImageDecoder}. Caller is
* responsible for calling {@link AImageDecoder_delete} on it.
* @return {@link ANDROID_IMAGE_DECODER_SUCCESS} on success or a value
* indicating the reason for the failure.
*
* Errors:
* - {@link ANDROID_IMAGE_DECODER_INCOMPLETE}: The file was truncated before
* reading the image header.
* - {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER}: The {@link AImageDecoder} is
* null, or |fd| does not represent a valid, seekable file descriptor.
* - {@link ANDROID_IMAGE_DECODER_INVALID_INPUT}: There is an error in the
* header.
* - {@link ANDROID_IMAGE_DECODER_SEEK_ERROR}: The descriptor failed to seek.
* - {@link ANDROID_IMAGE_DECODER_INTERNAL_ERROR}: Some other error, like a
* failure to allocate memory.
* - {@link ANDROID_IMAGE_DECODER_UNSUPPORTED_FORMAT}: The format is not
* supported.
AImageDecoder_decodeImage ¶
AImageDecoder_decodeImage :: proc "c" (decoder: ^AImageDecoder, pixels: rawptr, stride: uint, size: uint) -> AImageDecoderResult ---
*
* Decode the image into pixels, using the settings of the {@link AImageDecoder}.
*
* Available since API level 30.
*
* Starting in API level 31, it can be used to decode all of the frames of an
* animated image (i.e. GIF, WebP) using new APIs. Internally,
* AImageDecoder keeps track of its "current frame" - that is, the frame that
* will be decoded by a call to AImageDecoder_decodeImage. At creation time, the
* current frame is always the first frame, and multiple calls to this method
* will each decode the first frame. {@link AImageDecoder_advanceFrame} advances
* the current frame to the following frame, so that future calls to this method
* will decode that frame. Some frames may update only part of the image. They
* may only update a sub-rectangle (see {@link
* AImageDecoderFrameInfo_getFrameRect}), or they may have alpha (see
* {@link AImageDecoderFrameInfo_hasAlphaWithinBounds}). In these cases, this
* method assumes that the prior frame is still residing in the |pixels| buffer,
* decodes only the new portion, and blends it with the buffer. Frames that change
* the entire |pixels| buffer are "independent", and do not require the prior
* frame to remain in the buffer. The first frame is always independent. A
* sophisticated client can use information from the {@link AImageDecoderFrameInfo}
* to determine whether other frames are independent, or what frames they rely on.
*
* If the current frame is marked {@link ANDROID_IMAGE_DECODER_DISPOSE_OP_PREVIOUS},
* AImageDecoder_decodeImage will store the |pixels| buffer prior to decoding
* (note: this only happens for the first in a string of consecutive
* ANDROID_IMAGE_DECODER_DISPOSE_OP_PREVIOUS frames). After advancing to the
* following frame, AImageDecoder_decodeImage will restore that buffer prior to
* decoding that frame. This is the default behavior, but it can be disabled
* by passing false to {@link AImageDecoder_setInternallyHandleDisposePrevious}.
*
* Ignoring timing information, display, etc, a client wishing to decode all
* frames of an animated image may conceptually use code like the following:
*
* while (true) {
* int result = AImageDecoder_decodeImage(decoder, pixels, stride, size);
* if (result != ANDROID_IMAGE_DECODER_SUCCESS) break;
*
* // Display or save the image in |pixels|, keeping the buffer intact for
* // AImageDecoder to decode the next frame correctly.
* Application_viewImage(pixels);
*
* result = AImageDecoder_advanceFrame(decoder);
* if (result != ANDROID_IMAGE_DECODER_SUCCESS) break;
* }
*
* @param decoder Opaque object representing the decoder.
* @param pixels On success, will be filled with the result
* of the decode. Must be large enough to hold |size| bytes.
* @param stride Width in bytes of a single row. Must be at least
* {@link AImageDecoder_getMinimumStride} and a multiple of the
* bytes per pixel of the {@link AndroidBitmapFormat}.
* @param size Size of the pixel buffer in bytes. Must be at least
* stride * (height - 1) +
* {@link AImageDecoder_getMinimumStride}.
* @return {@link ANDROID_IMAGE_DECODER_SUCCESS} on success or a value
* indicating the reason for the failure.
*
* Errors:
* - {@link ANDROID_IMAGE_DECODER_INCOMPLETE}: The image was truncated. A
* partial image was decoded, and undecoded lines have been initialized to all
* zeroes.
* - {@link ANDROID_IMAGE_DECODER_ERROR}: The image contained an error. A
* partial image was decoded, and undecoded lines have been initialized to all
* zeroes.
* - {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER}: The {@link AImageDecoder} or
* |pixels| is null, the stride is not large enough or not pixel aligned, or
* |size| is not large enough.
* - {@link ANDROID_IMAGE_DECODER_SEEK_ERROR}: The asset or file descriptor
* failed to seek.
* - {@link ANDROID_IMAGE_DECODER_INTERNAL_ERROR}: Some other error, like a
* failure to allocate memory.
* - {@link ANDROID_IMAGE_DECODER_FINISHED}: The input contains no
* more frames. No decoding occurred. The client must call
* {@link AImageDecoder_rewind} before calling
* {@link AImageDecoder_decodeImage} again.
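The loop above, sketched in Odin against these bindings; timing and display are ignored, and the `.SUCCESS` member name is an assumption about this binding's AImageDecoderResult:

import android "core:sys/android"

play_all_frames :: proc(decoder: ^android.AImageDecoder, pixels: rawptr, stride: uint, size: uint) {
	for {
		if android.AImageDecoder_decodeImage(decoder, pixels, stride, size) != .SUCCESS {
			break
		}
		// Display or save the image in pixels, keeping the buffer intact
		// so the next frame can blend against it.
		if android.AImageDecoder_advanceFrame(decoder) != .SUCCESS {
			break // stops at ANDROID_IMAGE_DECODER_FINISHED after the last frame
		}
	}
}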
AImageDecoder_delete ¶
AImageDecoder_delete :: proc "c" (decoder: ^AImageDecoder) ---
*
* Delete the AImageDecoder.
*
* Available since API level 30.
*
* @param decoder {@link AImageDecoder} object created with one of the
* AImageDecoder_createFrom... functions.
AImageDecoder_getFrameInfo ¶
AImageDecoder_getFrameInfo :: proc "c" (decoder: ^AImageDecoder, info: ^AImageDecoderFrameInfo) -> AImageDecoderResult ---
*
* Fill |info| with information about the current frame.
*
* Introduced in API 31.
*
* Initially, this will return information about the first frame.
* {@link AImageDecoder_advanceFrame} and
* {@link AImageDecoder_rewind} can be used to change which frame
* is the current frame.
*
* If the image only has one frame, this will fill the {@link
* AImageDecoderFrameInfo} with the encoded info and reasonable
* defaults.
*
* If {@link AImageDecoder_advanceFrame} succeeded, this will succeed as well.
*
* @param decoder Opaque object representing the decoder.
* @param info Opaque object to hold frame information. On success, will be
* filled with information regarding the current frame.
* @return {@link ANDROID_IMAGE_DECODER_SUCCESS} on success or a value
* indicating the reason for the failure.
*
* Errors:
* - {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER}: One of the parameters is null.
* - {@link ANDROID_IMAGE_DECODER_FINISHED}: The input contains no
* more frames. The client must call {@link AImageDecoder_rewind} to reset the
* current frame to a valid frame (0).
AImageDecoder_getHeaderInfo ¶
AImageDecoder_getHeaderInfo :: proc "c" (decoder: ^AImageDecoder) -> ^AImageDecoderHeaderInfo ---
*
* Return an opaque handle for reading header info.
*
* This is owned by the {@link AImageDecoder} and will be destroyed when the
* AImageDecoder is destroyed via {@link AImageDecoder_delete}.
*
* @param decoder an {@link AImageDecoder} object.
*
* Available since API level 30.
AImageDecoder_getMinimumStride ¶
AImageDecoder_getMinimumStride :: proc "c" (decoder: ^AImageDecoder) -> uint ---
*
* Return the minimum stride that can be used in
* {@link AImageDecoder_decodeImage}.
*
* This stride provides no padding, meaning it will be exactly equal to the
* width times the number of bytes per pixel for the {@link AndroidBitmapFormat}
* being used.
*
* If the output is scaled (via {@link AImageDecoder_setTargetSize}) and/or
* cropped (via {@link AImageDecoder_setCrop}), this takes those into account.
*
* @param decoder an {@link AImageDecoder} object.
*
* Available since API level 30.
AImageDecoder_getRepeatCount ¶
AImageDecoder_getRepeatCount :: proc "c" (decoder: ^AImageDecoder) -> i32 ---
*
* Report how many times the animation should repeat.
*
* Introduced in API 31.
*
* This does not include the first play through. For example, a repeat
* count of 4 means that each frame is played 5 times.
*
* {@link ANDROID_IMAGE_DECODER_INFINITE} means to repeat forever.
*
* This may require seeking.
*
* For non-animated formats, this returns 0. It may return non-zero for
* an image with only one frame (i.e. {@link AImageDecoder_isAnimated} returns
* false) if the encoded image contains a repeat count.
*
* @param decoder an {@link AImageDecoder} object.
* @return Number of times to repeat on success or a value
* indicating the reason for the failure.
*
* Errors:
* - {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER}: The AImageDecoder
* is null.
AImageDecoder_isAnimated ¶
AImageDecoder_isAnimated :: proc "c" (decoder: ^AImageDecoder) -> bool ---
*
* Return true iff the image is animated - i.e. has multiple frames.
*
* Introduced in API 31.
*
* A single frame GIF is considered to *not* be animated. This may require
* seeking past the first frame to verify whether there is a following frame.
*
* @param decoder an {@link AImageDecoder} object.
*
* Errors:
* - returns false if |decoder| is null.
AImageDecoder_resultToString ¶
AImageDecoder_resultToString :: proc "c" (result: AImageDecoderResult) -> cstring ---
*
* Return a constant string value representing the error code.
*
* Introduced in API 31.
*
* Pass the return value from an {@link AImageDecoder} method (e.g.
* {@link AImageDecoder_decodeImage}) for a text string representing the error
* code.
*
* Errors:
* - Returns null for a value out of range.
AImageDecoder_rewind ¶
AImageDecoder_rewind :: proc "c" (decoder: ^AImageDecoder) -> AImageDecoderResult ---
*
* Return to the beginning of the animation.
*
* Introduced in API 31.
*
* After this call, the AImageDecoder will be ready to decode the
* first frame of the animation. This can be called after reaching
* the end of the animation or an error or in the middle of the
* animation.
*
* @param decoder an {@link AImageDecoder} object.
* @return {@link ANDROID_IMAGE_DECODER_SUCCESS} on success or a value
* indicating the reason for the failure.
*
* Errors:
* - {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER}: The AImageDecoder
* represents an image that is not animated (see
* {@link AImageDecoder_isAnimated}) or the AImageDecoder is
* null.
* - {@link ANDROID_IMAGE_DECODER_SEEK_ERROR}: The asset or file
* descriptor failed to seek.
AImageDecoder_setAndroidBitmapFormat ¶
AImageDecoder_setAndroidBitmapFormat :: proc "c" (decoder: ^AImageDecoder, format: AndroidBitmapFormat) -> AImageDecoderResult ---
*
* Choose the desired output format.
*
* If the encoded image represents an animation, this must be called while on
* the first frame (e.g. before calling {@link AImageDecoder_advanceFrame} or
* after calling {@link AImageDecoder_rewind}).
*
* Available since API level 30.
*
* @param format {@link AndroidBitmapFormat} to use for the output.
* @param decoder an {@link AImageDecoder} object.
* @return {@link ANDROID_IMAGE_DECODER_SUCCESS} on success or a value
* indicating the reason for the failure. On failure, the
* {@link AImageDecoder} uses the format it was already planning
* to use (either its default or a previously successful setting
* from this function).
*
* Errors:
* - {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER}: The
* {@link AImageDecoder} is null or |format| does not correspond to an
* {@link AndroidBitmapFormat}.
* - {@link ANDROID_IMAGE_DECODER_INVALID_CONVERSION}: The
* {@link AndroidBitmapFormat} is incompatible with the image.
* - {@link ANDROID_IMAGE_DECODER_INVALID_STATE}: The animation is not on
* the first frame.
AImageDecoder_setCrop ¶
AImageDecoder_setCrop :: proc "c" (decoder: ^AImageDecoder, crop: ARect) -> AImageDecoderResult ---
*
* Specify how to crop the output after scaling (if any).
*
* Future calls to {@link AImageDecoder_decodeImage} will crop their output to
* the specified {@link ARect}. Clients will only need to allocate enough memory
* for the cropped ARect.
*
* If the encoded image represents an animation, this must be called while on
* the first frame (e.g. before calling {@link AImageDecoder_advanceFrame} or
* after calling {@link AImageDecoder_rewind}).
*
* Available since API level 30.
*
* @param decoder an {@link AImageDecoder} object.
* @param crop Rectangle describing a crop of the decode. It must be contained inside of
* the (possibly scaled, by {@link AImageDecoder_setTargetSize})
* image dimensions. This will affect future calls to
* {@link AImageDecoder_getMinimumStride}, which will now return a
* value based on the width of the crop. An empty ARect -
* specifically { 0, 0, 0, 0 } - may be used to remove the cropping
* behavior. Any other empty or unsorted ARects will result in
* returning {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER}.
* @return {@link ANDROID_IMAGE_DECODER_SUCCESS} on success or a value
* indicating the reason for the failure.
*
* Errors:
* - {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER}: The
* {@link AImageDecoder} is null, or the crop is not contained by the
* (possibly scaled) image dimensions.
* - {@link ANDROID_IMAGE_DECODER_INVALID_STATE}: The animation is not on
* the first frame.
AImageDecoder_setDataSpace ¶
AImageDecoder_setDataSpace :: proc "c" (decoder: ^AImageDecoder, dataspace: ADataSpace) -> AImageDecoderResult ---
*
* Choose the dataspace for the output.
*
* Ignored by {@link ANDROID_BITMAP_FORMAT_A_8}, which does not support
* an {@link ADataSpace}.
*
* If the encoded image represents an animation, this must be called while on
* the first frame (e.g. before calling {@link AImageDecoder_advanceFrame} or
* after calling {@link AImageDecoder_rewind}).
*
* Available since API level 30.
*
* @param decoder an {@link AImageDecoder} object.
* @param dataspace The {@link ADataSpace} to decode into. An ADataSpace
* specifies how to interpret the colors. By default,
* AImageDecoder will decode into the ADataSpace specified by
* {@link AImageDecoderHeaderInfo_getDataSpace}. If this
* parameter is set to a different ADataSpace, AImageDecoder
* will transform the output into the specified ADataSpace.
* @return {@link ANDROID_IMAGE_DECODER_SUCCESS} on success or a value
* indicating the reason for the failure.
*
* Errors:
* - {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER}: The
* {@link AImageDecoder} is null or |dataspace| does not correspond to an
* {@link ADataSpace} value.
* - {@link ANDROID_IMAGE_DECODER_INVALID_STATE}: The animation is not on
* the first frame.
AImageDecoder_setInternallyHandleDisposePrevious ¶
AImageDecoder_setInternallyHandleDisposePrevious :: proc "c" (decoder: ^AImageDecoder, handleInternally: bool) ---
*
* Whether to have AImageDecoder store the frame prior to a
* frame marked {@link ANDROID_IMAGE_DECODER_DISPOSE_OP_PREVIOUS}.
*
* Introduced in API 31.
*
* The default is true. Many images will not have such a frame (it
* is not supported by WebP, and only some GIFs use it). But
* if frame i is ANDROID_IMAGE_DECODER_DISPOSE_OP_PREVIOUS, then i+1
* may depend on i-1. When this setting is true, AImageDecoder will
* defensively copy frame i-1 (i.e. the contents of |pixels| in
* {@link AImageDecoder_decodeImage}) into an internal buffer so that
* it can be used to decode i+1.
*
* AImageDecoder will only store a single frame, at the size specified
* by {@link AImageDecoder_setTargetSize} (or the original dimensions
* if that method has not been called), and will discard it when it is
* no longer necessary.
*
* A client that desires to manually store such frames may set this to
* false, so that AImageDecoder does not need to store this extra
* frame. Instead, when decoding the same
* ANDROID_IMAGE_DECODER_DISPOSE_OP_PREVIOUS frame i, AImageDecoder
* will decode directly into |pixels|, assuming the client stored i-1.
* When asked to decode frame i+1, AImageDecoder will now assume that
* the client provided i-1 in |pixels|.
*
* @param decoder an {@link AImageDecoder} object.
* @param handleInternally Whether AImageDecoder will internally
* handle ANDROID_IMAGE_DECODER_DISPOSE_OP_PREVIOUS
* frames.
AImageDecoder_setTargetSize ¶
AImageDecoder_setTargetSize :: proc "c" (decoder: ^AImageDecoder, width: i32, height: i32) -> AImageDecoderResult ---
*
* Specify the output size for a decoded image.
*
* Future calls to {@link AImageDecoder_decodeImage} will sample or scale the
* encoded image to reach the desired size. If a crop rect is set (via
* {@link AImageDecoder_setCrop}), it must be contained within the dimensions
* specified by width and height, and the output image will be the size of the
* crop rect.
*
* If the encoded image represents an animation, this must be called while on
* the first frame (e.g. before calling {@link AImageDecoder_advanceFrame} or
* after calling {@link AImageDecoder_rewind}).
*
* It is strongly recommended to use setTargetSize only for downscaling, as it
* is often more efficient to scale up when rendering than up-front due to
* reduced overall memory.
*
* Available since API level 30.
*
* @param decoder an {@link AImageDecoder} object.
* @param width Width of the output (prior to cropping).
* This will affect future calls to
* {@link AImageDecoder_getMinimumStride}, which will now return
* a value based on this width.
* @param height Height of the output (prior to cropping).
* @return {@link ANDROID_IMAGE_DECODER_SUCCESS} on success or a value
* indicating the reason for the failure.
*
* Errors:
* - {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER}: The
* {@link AImageDecoder} is null.
* - {@link ANDROID_IMAGE_DECODER_INVALID_SCALE}: |width| or |height| is <= 0,
* the size is too big, any existing crop is not contained by the new image
* dimensions, or the scale is incompatible with a previous call to
* {@link AImageDecoder_setUnpremultipliedRequired}(true).
* - {@link ANDROID_IMAGE_DECODER_INVALID_STATE}: The animation is not on
* the first frame.
AImageDecoder_setUnpremultipliedRequired ¶
AImageDecoder_setUnpremultipliedRequired :: proc "c" (decoder: ^AImageDecoder, unpremultipliedRequired: bool) -> AImageDecoderResult ---
*
* Specify whether the output's pixels should be unpremultiplied.
*
* By default, {@link AImageDecoder_decodeImage} will premultiply the pixels, if they have alpha.
* Pass true to this method to leave them unpremultiplied. This has no effect on an
* opaque image.
*
* If the encoded image represents an animation, this must be called while on
* the first frame (e.g. before calling {@link AImageDecoder_advanceFrame} or
* after calling {@link AImageDecoder_rewind}).
*
* Available since API level 30.
*
* @param decoder an {@link AImageDecoder} object.
* @param unpremultipliedRequired Pass true to leave the pixels unpremultiplied.
* @return {@link ANDROID_IMAGE_DECODER_SUCCESS} on success or a value
* indicating the reason for the failure.
*
* Errors:
* - {@link ANDROID_IMAGE_DECODER_INVALID_CONVERSION}: Unpremultiplied is not
* possible due to an existing scale set by
* {@link AImageDecoder_setTargetSize}.
* - {@link ANDROID_IMAGE_DECODER_BAD_PARAMETER}: The
* {@link AImageDecoder} is null.
* - {@link ANDROID_IMAGE_DECODER_INVALID_STATE}: The animation is not on
* the first frame.
AInputEvent_getDeviceId ¶
AInputEvent_getDeviceId :: proc "c" (event: ^AInputEvent) -> i32 ---
Get the id for the device that an input event came from.
*
* Input events can be generated by multiple different input devices.
* Use the input device id to obtain information about the input
* device that was responsible for generating a particular event.
*
* An input device id of 0 indicates that the event didn't come from a
* physical device; other numbers are arbitrary and you shouldn't depend
* on the values. Use the provided input device query API to obtain
* information about input devices.
AInputEvent_getSource ¶
AInputEvent_getSource :: proc "c" (event: ^AInputEvent) -> InputSource ---
Get the input event source.
AInputEvent_getType ¶
AInputEvent_getType :: proc "c" (event: ^AInputEvent) -> InputEventType ---
Get the input event type.
AInputEvent_release ¶
AInputEvent_release :: proc "c" (event: ^AInputEvent) ---
*
* Releases interface objects created by {@link AKeyEvent_fromJava()}
* and {@link AMotionEvent_fromJava()}.
* After returning, the specified {@link AInputEvent}* object becomes invalid and should no longer
* be used. The underlying Java object remains valid and does not change its state.
*
* Available since API level 31.
AInputQueue_attachLooper ¶
AInputQueue_attachLooper :: proc "c" (queue: ^AInputQueue, looper: ^ALooper, ident: i32, callback: ALooper_callbackFunc, data: rawptr) ---
*
* Add this input queue to a looper for processing. See
* {@link ALooper_addFd()} for information on the ident, callback, and data params.
AInputQueue_detachLooper ¶
AInputQueue_detachLooper :: proc "c" (queue: ^AInputQueue) ---
*
* Remove the input queue from the looper it is currently attached to.
AInputQueue_finishEvent ¶
AInputQueue_finishEvent :: proc "c" (queue: ^AInputQueue, event: ^AInputEvent, handled: i32) ---
*
* Report that dispatching has finished with the given event.
* This must be called after receiving an event with {@link AInputQueue_getEvent()}.
AInputQueue_fromJava ¶
AInputQueue_fromJava :: proc "c" (env: ^^JNINativeInterface, inputQueue: jobject) -> ^AInputQueue ---
*
* Returns the {@link AInputQueue}* object associated with the supplied Java InputQueue
* object. The returned native object holds a weak reference to the Java object,
* and is only valid as long as the Java object has not yet been disposed. You
* should ensure that there is a strong reference to the Java object and that it
* has not been disposed before using the returned object.
*
* Available since API level 33.
AInputQueue_getEvent ¶
AInputQueue_getEvent :: proc "c" (queue: ^AInputQueue, outEvent: ^^AInputEvent) -> i32 ---
*
* Returns the next available event from the queue. Returns a negative
* value if no events are available or an error has occurred.
AInputQueue_hasEvents ¶
AInputQueue_hasEvents :: proc "c" (queue: ^AInputQueue) -> i32 ---
*
* Returns true if there are one or more events available in the
* input queue. Returns 1 if the queue has events; 0 if
* it does not have events; and a negative value if there is an error.
AInputQueue_preDispatchEvent ¶
AInputQueue_preDispatchEvent :: proc "c" (queue: ^AInputQueue, event: ^AInputEvent) -> i32 ---
*
* Sends the key for standard pre-dispatching -- that is, possibly
* deliver it to the current IME to be consumed before the app. Returns
* 0 if it was not pre-dispatched, meaning you can process it right now.
* If non-zero is returned, you must abandon the current event
* processing and allow the event to appear again in the event queue
* (if it does not get consumed during pre-dispatching).
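For illustration, a minimal sketch of the dispatch loop implied by the four procedures above: fetch an event, offer it for pre-dispatch (e.g. to the IME), handle it, and report the result:

import android "core:sys/android"

drain_input_queue :: proc(queue: ^android.AInputQueue) {
	event: ^android.AInputEvent
	for android.AInputQueue_getEvent(queue, &event) >= 0 {
		if android.AInputQueue_preDispatchEvent(queue, event) != 0 {
			continue // taken for pre-dispatch; it may reappear in the queue later
		}
		handled: i32 = 0
		// ... inspect the event (AInputEvent_getType, etc.) and set handled ...
		android.AInputQueue_finishEvent(queue, event, handled)
	}
}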
AKeyEvent_fromJava ¶
AKeyEvent_fromJava :: proc "c" (env: ^^JNINativeInterface, keyEvent: jobject) -> ^AInputEvent ---
*
* Creates a native {@link AInputEvent}* object that is a copy of the specified Java
* android.view.KeyEvent. The result may be used with generic and KeyEvent-specific AInputEvent_*
* functions. The object returned by this function must be disposed using
* {@link AInputEvent_release()}.
*
* Available since API level 31.
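For illustration, a minimal JNI-side sketch: copy a Java KeyEvent into a native AInputEvent, read one field, and release the copy; `env` and `java_key_event` are assumed to come from a JNI entry point:

import android "core:sys/android"

key_code_from_java :: proc(env: ^^android.JNINativeInterface, java_key_event: android.jobject) -> android.Keycode {
	event := android.AKeyEvent_fromJava(env, java_key_event)
	defer android.AInputEvent_release(event) // the Java object remains valid
	return android.AKeyEvent_getKeyCode(event)
}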
AKeyEvent_getAction ¶
AKeyEvent_getAction :: proc "c" (key_event: ^AInputEvent) -> KeyEventAction ---
Get the key event action.
AKeyEvent_getDownTime ¶
AKeyEvent_getDownTime :: proc "c" (key_event: ^AInputEvent) -> i64 ---
*
* Get the time of the most recent key down event, in the
* java.lang.System.nanoTime() time base. If this is a down event,
* this will be the same as eventTime.
* Note that when chording keys, this value is the down time of the most
* recently pressed key, which may not be the same physical key of this
* event.
AKeyEvent_getEventTime ¶
AKeyEvent_getEventTime :: proc "c" (key_event: ^AInputEvent) -> i64 ---
*
* Get the time this event occurred, in the
* java.lang.System.nanoTime() time base.
AKeyEvent_getFlags ¶
AKeyEvent_getFlags :: proc "c" (key_event: ^AInputEvent) -> bit_set[KeyEventFlagsBits] ---
Get the key event flags.
AKeyEvent_getKeyCode ¶
AKeyEvent_getKeyCode :: proc "c" (key_event: ^AInputEvent) -> Keycode ---
*
* Get the key code of the key event.
* This is the physical key that was pressed, not the Unicode character.
AKeyEvent_getMetaState ¶
AKeyEvent_getMetaState :: proc "c" (key_event: ^AInputEvent) -> bit_set[MetaKeyStateBits; i32] ---
Get the meta key state.
AKeyEvent_getRepeatCount ¶
AKeyEvent_getRepeatCount :: proc "c" (key_event: ^AInputEvent) -> i32 ---
*
* Get the repeat count of the event.
* For both key up and key down events, this is the number of times the
* key has repeated with the first down starting at 0 and counting up
* from there. For multiple key events, this is the number of down/up
* pairs that have occurred.
AKeyEvent_getScanCode ¶
AKeyEvent_getScanCode :: proc "c" (key_event: ^AInputEvent) -> i32 ---
*
* Get the hardware key id of this key event.
* These values are not reliable and vary from device to device.
ALooper_acquire ¶
ALooper_acquire :: proc "c" (looper: ^ALooper) ---
*
* Acquire a reference on the given ALooper object. This prevents the
* object from being deleted until the reference is removed. This is
* only needed to safely hand an ALooper from one thread to another.
ALooper_addFd ¶
ALooper_addFd :: proc "c" ( looper: ^ALooper, fd: i32, ident: i32, events: bit_set[ALooperFdFlagsBits; i32], callback: ALooper_callbackFunc, data: rawptr, ) -> i32 ---
*
* Adds a new file descriptor to be polled by the looper.
* If the same file descriptor was previously added, it is replaced.
*
* "fd" is the file descriptor to be added.
* "ident" is an identifier for this event, which is returned from
* ALooper_pollOnce(). The identifier must be >= 0, or
* ALOOPER_POLL_CALLBACK if providing a non-NULL callback.
* "events" are the poll events to wake up on. Typically this is
* ALOOPER_EVENT_INPUT.
* "callback" is the function to call when there is an event on the
* file descriptor.
* "data" is a private data pointer to supply to the callback.
*
* There are two main uses of this function:
*
* (1) If "callback" is non-NULL, then this function will be called when
* there is data on the file descriptor. It should execute any events it
* has pending, appropriately reading from the file descriptor. The
* 'ident' is ignored in this case.
*
* (2) If "callback" is NULL, the 'ident' will be returned by
* ALooper_pollOnce when its file descriptor has data available,
* requiring the caller to take care of processing it.
*
* Returns 1 if the file descriptor was added or -1 if an error occurred.
*
* This method can be called on any thread.
* This method may block briefly if it needs to wake the poll.
ALooper_forThread ¶
ALooper_forThread :: proc "c" () -> ^ALooper ---
*
* Returns the looper associated with the calling thread, or NULL if
* there is not one.
ALooper_pollAll ¶
ALooper_pollAll :: proc "c" (timeoutMillis: i32, outFd: ^i32, outEvents: ^i32, outData: ^rawptr) -> i32 ---
*
* Like ALooper_pollOnce(), but performs all pending callbacks until all
* data has been consumed or a file descriptor is available with no
* callback. This function will never return ALOOPER_POLL_CALLBACK.
ALooper_pollOnce ¶
ALooper_pollOnce :: proc "c" (timeoutMillis: i32, outFd: ^i32, outEvents: ^i32, outData: ^rawptr) -> i32 ---
*
* Waits for events to be available, with optional timeout in
* milliseconds. Invokes callbacks for all file descriptors on which an
* event occurred.
*
* If the timeout is zero, returns immediately without blocking.
* If the timeout is negative, waits indefinitely until an event appears.
*
* Returns ALOOPER_POLL_WAKE if the poll was awoken using wake() before
* the timeout expired and no callbacks were invoked and no other file
* descriptors were ready.
*
* Returns ALOOPER_POLL_CALLBACK if one or more callbacks were invoked.
*
* Returns ALOOPER_POLL_TIMEOUT if there was no data before the given
* timeout expired.
*
* Returns ALOOPER_POLL_ERROR if an error occurred.
*
* Returns a value >= 0 containing an identifier (the same identifier
* `ident` passed to ALooper_addFd()) if its file descriptor has data
* and it has no callback function (requiring the caller here to
* handle it). In this (and only this) case outFd, outEvents and
* outData will contain the poll events and data associated with the
* fd, otherwise they will be set to NULL.
*
* This method does not return until it has finished invoking the
* appropriate callbacks for all file descriptors that were signalled.
ALooper_prepare ¶
ALooper_prepare :: proc "c" (opts: i32) -> ^ALooper ---
*
* Prepares a looper associated with the calling thread, and returns it.
* If the thread already has a looper, it is returned. Otherwise, a new
* one is created, associated with the thread, and returned.
*
* The opts may be ALOOPER_PREPARE_ALLOW_NON_CALLBACKS or 0.
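For illustration, a minimal sketch that prepares a looper for the calling thread, registers a file descriptor with a callback, and polls. Several details are assumptions about this binding: the callback is written to the NDK's (fd, events, data) shape for ALooper_callbackFunc, `.INPUT` is an assumed ALooperFdFlagsBits member, and -4 mirrors the NDK's ALOOPER_POLL_ERROR:

import android "core:sys/android"

on_fd_ready :: proc "c" (fd: i32, events: i32, data: rawptr) -> i32 {
	// ... read from fd here ...
	return 1 // keep receiving callbacks; returning 0 would unregister the fd
}

poll_forever :: proc(fd: i32) {
	looper := android.ALooper_prepare(0)
	android.ALooper_addFd(looper, fd, 0, {.INPUT}, on_fd_ready, nil)
	for {
		out_fd, out_events: i32
		out_data: rawptr
		if android.ALooper_pollOnce(-1, &out_fd, &out_events, &out_data) == -4 {
			break // ALOOPER_POLL_ERROR in the NDK headers
		}
	}
}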
ALooper_release ¶
ALooper_release :: proc "c" (looper: ^ALooper) ---
*
* Remove a reference that was previously acquired with ALooper_acquire().
ALooper_removeFd ¶
ALooper_removeFd :: proc "c" (looper: ^ALooper, fd: i32) -> i32 ---
*
* Removes a previously added file descriptor from the looper.
*
* When this method returns, it is safe to close the file descriptor
* since the looper will no longer have a reference to it. However, it
* is possible for the callback to already be running or for it to run
* one last time if the file descriptor was already signalled. Calling
* code is responsible for ensuring that this case is safely handled.
* For example, if the callback takes care of removing itself during its
* own execution either by returning 0 or by calling this method, then
* it can be guaranteed to not be invoked again at any later time unless
* registered anew.
*
* Returns 1 if the file descriptor was removed, 0 if none was
* previously registered or -1 if an error occurred.
*
* This method can be called on any thread.
* This method may block briefly if it needs to wake the poll.
ALooper_wake ¶
ALooper_wake :: proc "c" (looper: ^ALooper) ---
*
* Wakes the poll asynchronously.
*
* This method can be called on any thread.
* This method returns immediately.
AMotionEvent_fromJava ¶
AMotionEvent_fromJava :: proc "c" (env: ^^JNINativeInterface, motionEvent: jobject) -> ^AInputEvent ---
*
* Creates a native {@link AInputEvent}* object that is a copy of the specified Java
* android.view.MotionEvent. The result may be used with generic and MotionEvent-specific
* AInputEvent_* functions. The object returned by this function must be disposed using
* {@link AInputEvent_release()}.
*
* Available since API level 31.
AMotionEvent_getAction ¶
AMotionEvent_getAction :: proc "c" (motion_event: ^AInputEvent) -> MotionEventAction ---
Get the combined motion event action code and pointer index.
AMotionEvent_getActionButton ¶
AMotionEvent_getActionButton :: proc "c" (motion_event: ^AInputEvent) -> MotionEventButton ---
*
* Get the action button for the motion event. Returns a valid action
* button when the event is associated with a button press or button
* release action. For other actions the return value is undefined.
*
* Available since API level 33.
*
* @see #AMOTION_EVENT_BUTTON_PRIMARY
* @see #AMOTION_EVENT_BUTTON_SECONDARY
* @see #AMOTION_EVENT_BUTTON_TERTIARY
* @see #AMOTION_EVENT_BUTTON_BACK
* @see #AMOTION_EVENT_BUTTON_FORWARD
* @see #AMOTION_EVENT_BUTTON_STYLUS_PRIMARY
* @see #AMOTION_EVENT_BUTTON_STYLUS_SECONDARY
AMotionEvent_getAxisValue ¶
AMotionEvent_getAxisValue :: proc "c" (motion_event: ^AInputEvent, axis: MotionEventAxis, pointer_index: uint) -> f32 ---
Get the value of the requested axis for the given pointer index.
AMotionEvent_getButtonState ¶
AMotionEvent_getButtonState :: proc "c" (motion_event: ^AInputEvent) -> i32 ---
Get the button state of all buttons that are pressed. TODO: verify whether this should return an enum or bit_set; it may correspond to MotionEventButton, which can be double-checked by printing values from real events.
AMotionEvent_getClassification ¶
AMotionEvent_getClassification :: proc "c" (motion_event: ^AInputEvent) -> AMotionClassification ---
*
* Returns the classification for the current gesture.
* The classification may change as more events become available for the
* same gesture.
*
* Available since API level 33.
*
* @see #AMOTION_EVENT_CLASSIFICATION_NONE
* @see #AMOTION_EVENT_CLASSIFICATION_AMBIGUOUS_GESTURE
* @see #AMOTION_EVENT_CLASSIFICATION_DEEP_PRESS
AMotionEvent_getDownTime ¶
AMotionEvent_getDownTime :: proc "c" (motion_event: ^AInputEvent) -> i64 ---
*
* Get the time when the user originally pressed down to start a stream
* of position events, in the java.lang.System.nanoTime() time base.
AMotionEvent_getEdgeFlags ¶
AMotionEvent_getEdgeFlags :: proc "c" (motion_event: ^AInputEvent) -> bit_set[MotionEventEdgeFlagsBits; i32] ---
*
* Get a bitfield indicating which edges, if any, were touched by this
* motion event. For touch events, clients can use this to determine if
* the user's finger was touching the edge of the display.
AMotionEvent_getEventTime ¶
AMotionEvent_getEventTime :: proc "c" (motion_event: ^AInputEvent) -> i64 ---
*
* Get the time when this specific event was generated,
* in the java.lang.System.nanoTime() time base.
AMotionEvent_getFlags ¶
AMotionEvent_getFlags :: proc "c" (motion_event: ^AInputEvent) -> bit_set[MotionEventFlagsBits; i32] ---
Get the motion event flags.
AMotionEvent_getHistoricalAxisValue ¶
AMotionEvent_getHistoricalAxisValue :: proc "c" (motion_event: ^AInputEvent, axis: MotionEventAxis, pointer_index: uint, history_index: uint) -> f32 ---
*
* Get the historical value of the requested axis for the given pointer
* index that occurred between this event and the previous motion event.
AMotionEvent_getHistoricalEventTime ¶
AMotionEvent_getHistoricalEventTime :: proc "c" (motion_event: ^AInputEvent, history_index: uint) -> i64 ---
*
* Get the time that a historical movement occurred between this event
* and the previous event, in the java.lang.System.nanoTime() time base.
AMotionEvent_getHistoricalOrientation ¶
AMotionEvent_getHistoricalOrientation :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint, history_index: uint) -> f32 ---
*
* Get the historical orientation of the touch area and tool area in
* radians clockwise from vertical for the given pointer index that
* occurred between this event and the previous motion event.
* An angle of 0 radians indicates that the major axis of contact is
* oriented upwards, is perfectly circular or is of unknown orientation.
* A positive angle indicates that the major axis of contact is oriented
* to the right. A negative angle indicates that the major axis of
* contact is oriented to the left.
* The full range is from -PI/2 radians (finger pointing fully left) to
* PI/2 radians (finger pointing fully right).
AMotionEvent_getHistoricalPressure ¶
AMotionEvent_getHistoricalPressure :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint, history_index: uint) -> f32 ---
*
* Get the historical pressure of this event for the given pointer index
* that occurred between this event and the previous motion event.
* The pressure generally ranges from 0 (no pressure at all) to 1
* (normal pressure), although values higher than 1 may be generated
* depending on the calibration of the input device.
AMotionEvent_getHistoricalRawX ¶
AMotionEvent_getHistoricalRawX :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint, history_index: uint) -> f32 ---
*
* Get the historical raw X coordinate of this event for the given
* pointer index that occurred between this event and the previous
* motion event.
* For touch events on the screen, this is the original location of the
* event on the screen, before it had been adjusted for the containing
* window and views.
* Whole numbers are pixels; the value may have a fraction for input
* devices that are sub-pixel precise.
AMotionEvent_getHistoricalRawY ¶
AMotionEvent_getHistoricalRawY :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint, history_index: uint) -> f32 ---
*
* Get the historical raw Y coordinate of this event for the given
* pointer index that occurred between this event and the previous
* motion event.
* For touch events on the screen, this is the original location of the
* event on the screen, before it had been adjusted for the containing
* window and views.
* Whole numbers are pixels; the value may have a fraction for input
* devices that are sub-pixel precise.
AMotionEvent_getHistoricalSize ¶
AMotionEvent_getHistoricalSize :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint, history_index: uint) -> f32 ---
*
* Get the historical scaled value of the approximate size for the given
* pointer index that occurred between this event and the previous
* motion event.
* This represents some approximation of the area of the screen being
* pressed; the actual value in pixels corresponding to the touch is
* normalized with the device specific range of values and scaled to a
* value between 0 and 1. The value of size can be used to determine fat
* touch events.
AMotionEvent_getHistoricalToolMajor ¶
AMotionEvent_getHistoricalToolMajor :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint, history_index: uint) -> f32 ---
*
* Get the historical length of the major axis of an ellipse that
* describes the size of the approaching tool for the given pointer
* index that occurred between this event and the previous motion event.
* The tool area represents the estimated size of the finger or pen that
* is touching the device independent of its actual touch area at the
* point of contact.
AMotionEvent_getHistoricalToolMinor ¶
AMotionEvent_getHistoricalToolMinor :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint, history_index: uint) -> f32 ---
*
* Get the historical length of the minor axis of an ellipse that
* describes the size of the approaching tool for the given pointer
* index that occurred between this event and the previous motion event.
* The tool area represents the estimated size of the finger or pen that
* is touching the device independent of its actual touch area at the
* point of contact.
AMotionEvent_getHistoricalTouchMajor ¶
AMotionEvent_getHistoricalTouchMajor :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint, history_index: uint) -> f32 ---
*
* Get the historical length of the major axis of an ellipse that
* describes the touch area at the point of contact for the given
* pointer index that occurred between this event and the previous
* motion event.
AMotionEvent_getHistoricalTouchMinor ¶
AMotionEvent_getHistoricalTouchMinor :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint, history_index: uint) -> f32 ---
*
* Get the historical length of the minor axis of an ellipse that
* describes the touch area at the point of contact for the given
* pointer index that occurred between this event and the previous
* motion event.
AMotionEvent_getHistoricalX ¶
AMotionEvent_getHistoricalX :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint, history_index: uint) -> f32 ---
*
* Get the historical X coordinate of this event for the given pointer
* index that occurred between this event and the previous motion event.
* Whole numbers are pixels; the value may have a fraction for input
* devices that are sub-pixel precise.
AMotionEvent_getHistoricalY ¶
AMotionEvent_getHistoricalY :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint, history_index: uint) -> f32 ---
*
* Get the historical Y coordinate of this event for the given pointer
* index that occurred between this event and the previous motion event.
* Whole numbers are pixels; the value may have a fraction for input
* devices that are sub-pixel precise.
AMotionEvent_getHistorySize ¶
AMotionEvent_getHistorySize :: proc "c" (motion_event: ^AInputEvent) -> uint ---
*
* Get the number of historical points in this event. These are movements that
* have occurred between this event and the previous event. This only applies
* to #AMOTION_EVENT_ACTION_MOVE events -- all other actions will have a size of 0.
* Historical samples are indexed from oldest to newest.
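For instance, a move handler can fold the batched history into its path before using the current sample. A minimal sketch (the proc name and the path-smoothing step are hypothetical):

package example

import android "core:sys/android"

// Walk every historical sample, oldest to newest, then the current one.
process_move :: proc(event: ^android.AInputEvent) {
	pointer_count := android.AMotionEvent_getPointerCount(event)
	history_size  := android.AMotionEvent_getHistorySize(event)
	for p in 0..<pointer_count {
		for h in 0..<history_size {
			x := android.AMotionEvent_getHistoricalX(event, p, h)
			y := android.AMotionEvent_getHistoricalY(event, p, h)
			_, _ = x, y // feed (x, y) into path smoothing here
		}
		// The current coordinates are newer than all history samples.
		x := android.AMotionEvent_getX(event, p)
		y := android.AMotionEvent_getY(event, p)
		_, _ = x, y
	}
}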
AMotionEvent_getMetaState ¶
AMotionEvent_getMetaState :: proc "c" (motion_event: ^AInputEvent) -> bit_set[MetaKeyStateBits; i32] ---
*
* Get the state of any meta / modifier keys that were in effect when the
* event was generated.
AMotionEvent_getOrientation ¶
AMotionEvent_getOrientation :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint) -> f32 ---
*
* Get the current orientation of the touch area and tool area in radians clockwise from
* vertical for the given pointer index.
* An angle of 0 radians indicates that the major axis of contact is oriented
* upwards, is perfectly circular or is of unknown orientation. A positive angle
* indicates that the major axis of contact is oriented to the right. A negative angle
* indicates that the major axis of contact is oriented to the left.
* The full range is from -PI/2 radians (finger pointing fully left) to PI/2 radians
* (finger pointing fully right).
AMotionEvent_getPointerCount ¶
AMotionEvent_getPointerCount :: proc "c" (motion_event: ^AInputEvent) -> uint ---
*
* Get the number of pointers of data contained in this event.
* Always >= 1.
AMotionEvent_getPointerId ¶
AMotionEvent_getPointerId :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint) -> i32 ---
*
* Get the pointer identifier associated with a particular pointer
* data index in this event. The identifier tells you the actual pointer
* number associated with the data, accounting for individual pointers
* going up and down since the start of the current gesture.
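Because indices shift as fingers go up and down, per-finger state should be keyed by this id rather than by index. A sketch under that assumption (MAX_POINTERS and the positions table are hypothetical):

package example

import android "core:sys/android"

MAX_POINTERS :: 16 // hypothetical upper bound for this app

// Store the latest position of each finger under its stable pointer id.
update_touches :: proc(event: ^android.AInputEvent, positions: ^[MAX_POINTERS][2]f32) {
	for i in 0..<android.AMotionEvent_getPointerCount(event) {
		id := android.AMotionEvent_getPointerId(event, i)
		if id >= 0 && id < MAX_POINTERS {
			positions[id] = {android.AMotionEvent_getX(event, i), android.AMotionEvent_getY(event, i)}
		}
	}
}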
AMotionEvent_getPressure ¶
AMotionEvent_getPressure :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint) -> f32 ---
*
* Get the current pressure of this event for the given pointer index.
* The pressure generally ranges from 0 (no pressure at all) to 1 (normal pressure),
* although values higher than 1 may be generated depending on the calibration of
* the input device.
AMotionEvent_getRawX ¶
AMotionEvent_getRawX :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint) -> f32 ---
*
* Get the original raw X coordinate of this event.
* For touch events on the screen, this is the original location of the event
* on the screen, before it had been adjusted for the containing window
* and views.
AMotionEvent_getRawY ¶
AMotionEvent_getRawY :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint) -> f32 ---
*
* Get the original raw Y coordinate of this event.
* For touch events on the screen, this is the original location of the event
* on the screen, before it had been adjusted for the containing window
* and views.
AMotionEvent_getSize ¶
AMotionEvent_getSize :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint) -> f32 ---
*
* Get the current scaled value of the approximate size for the given pointer index.
* This represents some approximation of the area of the screen being
* pressed; the actual value in pixels corresponding to the
* touch is normalized with the device specific range of values
* and scaled to a value between 0 and 1. The value of size can be used to
* determine fat touch events.
AMotionEvent_getToolMajor ¶
AMotionEvent_getToolMajor :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint) -> f32 ---
*
* Get the current length of the major axis of an ellipse that describes the size
* of the approaching tool for the given pointer index.
* The tool area represents the estimated size of the finger or pen that is
* touching the device independent of its actual touch area at the point of contact.
AMotionEvent_getToolMinor ¶
AMotionEvent_getToolMinor :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint) -> f32 ---
*
* Get the current length of the minor axis of an ellipse that describes the size
* of the approaching tool for the given pointer index.
* The tool area represents the estimated size of the finger or pen that is
* touching the device independent of its actual touch area at the point of contact.
AMotionEvent_getToolType ¶
AMotionEvent_getToolType :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint) -> ToolType ---
*
* Get the tool type of a pointer for the given pointer index.
* The tool type indicates the type of tool used to make contact such as a
* finger or stylus, if known.
AMotionEvent_getTouchMajor ¶
AMotionEvent_getTouchMajor :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint) -> f32 ---
*
* Get the current length of the major axis of an ellipse that describes the touch area
* at the point of contact for the given pointer index.
AMotionEvent_getTouchMinor ¶
AMotionEvent_getTouchMinor :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint) -> f32 ---
*
* Get the current length of the minor axis of an ellipse that describes the touch area
* at the point of contact for the given pointer index.
AMotionEvent_getX ¶
AMotionEvent_getX :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint) -> f32 ---
*
* Get the current X coordinate of this event for the given pointer index.
* Whole numbers are pixels; the value may have a fraction for input devices
* that are sub-pixel precise.
AMotionEvent_getXOffset ¶
AMotionEvent_getXOffset :: proc "c" (motion_event: ^AInputEvent) -> f32 ---
*
* Get the X coordinate offset.
* For touch events on the screen, this is the delta that was added to the raw
* screen coordinates to adjust for the absolute position of the containing windows
* and views.
AMotionEvent_getXPrecision ¶
AMotionEvent_getXPrecision :: proc "c" (#by_ptr motion_event: AInputEvent) -> f32 ---
*
* Get the precision of the X coordinates being reported.
* You can multiply this number with an X coordinate sample to find the
* actual hardware value of the X coordinate.
AMotionEvent_getY ¶
AMotionEvent_getY :: proc "c" (motion_event: ^AInputEvent, pointer_index: uint) -> f32 ---
*
* Get the current Y coordinate of this event for the given pointer index.
* Whole numbers are pixels; the value may have a fraction for input devices
* that are sub-pixel precise.
AMotionEvent_getYOffset ¶
AMotionEvent_getYOffset :: proc "c" (motion_event: ^AInputEvent) -> f32 ---
*
* Get the Y coordinate offset.
* For touch events on the screen, this is the delta that was added to the raw
* screen coordinates to adjust for the absolute position of the containing windows
* and views.
AMotionEvent_getYPrecision ¶
AMotionEvent_getYPrecision :: proc "c" (#by_ptr motion_event: AInputEvent) -> f32 ---
*
* Get the precision of the Y coordinates being reported.
* You can multiply this number with a Y coordinate sample to find the
* actual hardware value of the Y coordinate.
ANativeActivity_finish ¶
ANativeActivity_finish :: proc "c" (activity: ^ANativeActivity) ---
*
* Finish the given activity. Its finish() method will be called, causing it
* to be stopped and destroyed. Note that this method can be called from
* *any* thread; it will send a message to the main thread of the process
* where the Java finish call will take place.
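A native back-button handler, for example, can post the finish request directly from the input thread (the proc name is hypothetical):

package example

import android "core:sys/android"

// Safe from any thread: this only enqueues a message for the main thread.
handle_back_pressed :: proc(activity: ^android.ANativeActivity) {
	android.ANativeActivity_finish(activity)
}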
ANativeActivity_hideSoftInput ¶
ANativeActivity_hideSoftInput :: proc "c" (activity: ^ANativeActivity, flags: HideSoftInputFlags) ---
*
* Hide the IME while in the given activity. Calls InputMethodManager.hideSoftInput()
* for the given activity. Note that this method can be called from
* *any* thread; it will send a message to the main thread of the process
* where the Java call will take place.
ANativeActivity_setWindowFlags ¶
ANativeActivity_setWindowFlags :: proc "c" (activity: ^ANativeActivity, addFlags: bit_set[WindowFlagsBits; u32], removeFlags: bit_set[WindowFlagsBits; u32]) ---
*
* Change the window flags of the given activity. Calls getWindow().setFlags()
* of the given activity. Note that this method can be called from
* *any* thread; it will send a message to the main thread of the process
* where the Java call will take place. See window.h for flag constants.
ANativeActivity_setWindowFormat ¶
ANativeActivity_setWindowFormat :: proc "c" (activity: ^ANativeActivity, format: AHardwareBuffer_Format) ---
*
* Change the window format of the given activity. Calls getWindow().setFormat()
* of the given activity. Note that this method can be called from
* *any* thread; it will send a message to the main thread of the process
* where the Java call will take place.
ANativeActivity_showSoftInput ¶
ANativeActivity_showSoftInput :: proc "c" (activity: ^ANativeActivity, flags: ShowSoftInputFlags) ---
*
* Show the IME while in the given activity. Calls InputMethodManager.showSoftInput()
* for the given activity. Note that this method can be called from
* *any* thread; it will send a message to the main thread of the process
* where the Java call will take place.
ANativeWindow_acquire ¶
ANativeWindow_acquire :: proc "c" (window: ^ANativeWindow) ---
*
* Acquire a reference on the given {@link ANativeWindow} object. This prevents the object
* from being deleted until the reference is removed.
ANativeWindow_clearFrameRate ¶
ANativeWindow_clearFrameRate :: proc "c" (window: ^ANativeWindow) -> i32 ---
*
* Clears the frame rate which is set for this window.
*
* This is equivalent to calling
* ANativeWindow_setFrameRateWithChangeStrategy(window, 0, compatibility, changeFrameRateStrategy).
*
* Usage of this API won't introduce frame rate throttling,
* or affect other aspects of the application's frame production
* pipeline. However, because the system may change the display refresh rate,
* calls to this function may result in changes to Choreographer callback
* timings, and changes to the time interval at which the system releases
* buffers back to the application.
*
* Note that this only has an effect for windows presented on the display. If
* this ANativeWindow is consumed by something other than the system compositor,
* e.g. a media codec, this call has no effect.
*
* You can register for changes in the refresh rate using
* \a AChoreographer_registerRefreshRateCallback.
*
* See ANativeWindow_setFrameRateWithChangeStrategy().
*
* Available since API level 31.
*
* \param window pointer to an ANativeWindow object.
*
* \return 0 for success, -EINVAL if the window value is invalid.
ANativeWindow_fromSurface ¶
ANativeWindow_fromSurface :: proc "c" (env: ^^JNINativeInterface, surface: jobject) -> ^ANativeWindow ---
*
* Return the ANativeWindow associated with a Java Surface object,
* for interacting with it through native code. This acquires a reference
* on the ANativeWindow that is returned; be sure to use ANativeWindow_release()
* when done with it so that it doesn't leak.
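The acquire/release pairing looks like this in practice; a sketch assuming env and surface arrive through a JNI entry point and that the binding exposes the jobject type used in the signature above:

package example

import android "core:sys/android"

// Borrow the window behind a Java Surface for one native render pass.
with_surface :: proc(env: ^^android.JNINativeInterface, surface: android.jobject) {
	window := android.ANativeWindow_fromSurface(env, surface)
	if window == nil {
		return
	}
	// fromSurface acquired a reference; drop it when done.
	defer android.ANativeWindow_release(window)
	// ... draw into window ...
}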
ANativeWindow_getBuffersDataSpace ¶
ANativeWindow_getBuffersDataSpace :: proc "c" (window: ^ANativeWindow) -> ADataSpace ---
*
* Get the dataspace of the buffers in window.
*
* Available since API level 28.
*
* \return the dataspace of buffers in window, ADATASPACE_UNKNOWN is returned if
* dataspace is unknown, or -EINVAL if window is invalid.
ANativeWindow_getBuffersDefaultDataSpace ¶
ANativeWindow_getBuffersDefaultDataSpace :: proc "c" (window: ^ANativeWindow) -> ADataSpace ---
*
* Get the default dataspace of the buffers in window as set by the consumer.
*
* Available since API level 34.
*
* \return the dataspace of buffers in window, ADATASPACE_UNKNOWN is returned if
* dataspace is unknown, or -EINVAL if window is invalid.
ANativeWindow_getFormat ¶
ANativeWindow_getFormat :: proc "c" (window: ^ANativeWindow) -> AHardwareBuffer_Format ---
*
* Return the current pixel format (AHARDWAREBUFFER_FORMAT_*) of the window surface.
*
* \return a negative value on error.
ANativeWindow_getHeight ¶
ANativeWindow_getHeight :: proc "c" (window: ^ANativeWindow) -> i32 ---
*
* Return the current height in pixels of the window surface.
*
* \return a negative value on error.
ANativeWindow_getWidth ¶
ANativeWindow_getWidth :: proc "c" (window: ^ANativeWindow) -> i32 ---
*
* Return the current width in pixels of the window surface.
*
* \return a negative value on error.
ANativeWindow_lock ¶
ANativeWindow_lock :: proc "c" (window: ^ANativeWindow, outBuffer: ^ANativeWindow_Buffer, inOutDirtyBounds: ^ARect) -> i32 ---
*
* Lock the window's next drawing surface for writing.
* inOutDirtyBounds is used as an in/out parameter, upon entering the
* function, it contains the dirty region, that is, the region the caller
* intends to redraw. When the function returns, inOutDirtyBounds is updated
* with the actual area the caller needs to redraw -- this region is often
* extended by {@link ANativeWindow_lock}.
*
* \return 0 for success, or a negative value on error.
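Paired with ANativeWindow_unlockAndPost (documented below), this gives the usual software render loop. A sketch, assuming the ANativeWindow_Buffer fields mirror the C struct (bits, stride) and that a nil dirty rect requests a full redraw:

package example

import android "core:sys/android"

// Lock, fill, and post one software frame.
draw_frame :: proc(window: ^android.ANativeWindow) -> bool {
	buffer: android.ANativeWindow_Buffer
	if android.ANativeWindow_lock(window, &buffer, nil) != 0 {
		return false
	}
	// buffer.bits points at writable pixels; buffer.stride is in pixels.
	// ... write pixels here ...
	return android.ANativeWindow_unlockAndPost(window) == 0
}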
ANativeWindow_release ¶
ANativeWindow_release :: proc "c" (window: ^ANativeWindow) ---
*
* Remove a reference that was previously acquired with {@link ANativeWindow_acquire()}.
ANativeWindow_setBuffersDataSpace ¶
ANativeWindow_setBuffersDataSpace :: proc "c" (window: ^ANativeWindow, dataSpace: ADataSpace) -> i32 ---
*
* All buffers queued after this call will be associated with the dataSpace
* parameter specified.
*
* dataSpace specifies additional information about the buffer.
* For example, it can be used to convey the color space of the image data in
* the buffer, or it can be used to indicate that the buffers contain depth
* measurement data instead of color images. The default dataSpace is 0,
* ADATASPACE_UNKNOWN, unless it has been overridden by the producer.
*
* Available since API level 28.
*
* \param window pointer to an ANativeWindow object.
* \param dataSpace data space of all buffers queued after this call.
* \return 0 for success, -EINVAL if window is invalid or the dataspace is not
* supported.
ANativeWindow_setBuffersGeometry ¶
ANativeWindow_setBuffersGeometry :: proc "c" (window: ^ANativeWindow, width: i32, height: i32, format: i32) -> i32 ---
*
* Change the format and size of the window buffers.
*
* The width and height control the number of pixels in the buffers, not the
* dimensions of the window on screen. If these are different than the
* window's physical size, then its buffer will be scaled to match that size
* when compositing it to the screen. The width and height must be either both zero
* or both non-zero.
*
* For all of these parameters, if 0 is supplied then the window's base
* value will come back in force.
*
* \param window pointer to an ANativeWindow object.
* \param width width of the buffers in pixels.
* \param height height of the buffers in pixels.
* \param format one of the AHardwareBuffer_Format constants.
* \return 0 for success, or a negative value on error.
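For example, a renderer can trade resolution for fill rate by shrinking the buffers while keeping the on-screen size. A sketch; passing 0 for format keeps the window's base format per the note above:

package example

import android "core:sys/android"

// Render at half resolution; the compositor scales up to the window size.
halve_buffer_resolution :: proc(window: ^android.ANativeWindow) -> i32 {
	w := android.ANativeWindow_getWidth(window)
	h := android.ANativeWindow_getHeight(window)
	if w <= 0 || h <= 0 {
		return -1 // getWidth/getHeight report errors as negative values
	}
	return android.ANativeWindow_setBuffersGeometry(window, w/2, h/2, 0)
}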
ANativeWindow_setBuffersTransform ¶
ANativeWindow_setBuffersTransform :: proc "c" (window: ^ANativeWindow, transform: ANativeWindowTransform) -> i32 ---
*
* Set a transform that will be applied to future buffers posted to the window.
*
* Available since API level 26.
*
* \param window pointer to an ANativeWindow object.
* \param transform combination of {@link ANativeWindowTransform} flags
* \return 0 for success, or -EINVAL if \p transform is invalid
ANativeWindow_setFrameRate ¶
ANativeWindow_setFrameRate :: proc "c" (window: ^ANativeWindow, frameRate: f32, compatibility: ANativeWindow_FrameRateCompatibility) -> i32 ---
*
* Same as ANativeWindow_setFrameRateWithChangeStrategy(window, frameRate, compatibility,
* ANATIVEWINDOW_CHANGE_FRAME_RATE_ONLY_IF_SEAMLESS).
*
* See ANativeWindow_setFrameRateWithChangeStrategy().
*
* Available since API level 30.
ANativeWindow_setFrameRateWithChangeStrategy ¶
ANativeWindow_setFrameRateWithChangeStrategy :: proc "c" (window: ^ANativeWindow, frameRate: f32, compatibility: ANativeWindow_FrameRateCompatibility, changeFrameRateStrategy: ANativeWindow_ChangeFrameRateStrategy) -> i32 ---
*
* Sets the intended frame rate for this window.
*
* On devices that are capable of running the display at different refresh
* rates, the system may choose a display refresh rate to better match this
* window's frame rate. Usage of this API won't introduce frame rate throttling,
* or affect other aspects of the application's frame production
* pipeline. However, because the system may change the display refresh rate,
* calls to this function may result in changes to Choreographer callback
* timings, and changes to the time interval at which the system releases
* buffers back to the application.
*
* Note that this only has an effect for windows presented on the display. If
* this ANativeWindow is consumed by something other than the system compositor,
* e.g. a media codec, this call has no effect.
*
* You can register for changes in the refresh rate using
* \a AChoreographer_registerRefreshRateCallback.
*
* See ANativeWindow_clearFrameRate().
*
* Available since API level 31.
*
* \param window pointer to an ANativeWindow object.
*
* \param frameRate The intended frame rate of this window, in frames per
* second. 0 is a special value that indicates the app will accept the system's
* choice for the display frame rate, which is the default behavior if this
* function isn't called. The frameRate param does <em>not</em> need to be a
* valid refresh rate for this device's display - e.g., it's fine to pass 30fps
* to a device that can only run the display at 60fps.
*
* \param compatibility The frame rate compatibility of this window. The
* compatibility value may influence the system's choice of display refresh
* rate. See the ANATIVEWINDOW_FRAME_RATE_COMPATIBILITY_* values for more info.
* This parameter is ignored when frameRate is 0.
*
* \param changeFrameRateStrategy Whether display refresh rate transitions caused by this
* window should be seamless.
* A seamless transition is one that doesn't have any visual interruptions, such as a black
* screen for a second or two. See the ANATIVEWINDOW_CHANGE_FRAME_RATE_* values.
* This parameter is ignored when frameRate is 0.
*
* \return 0 for success, -EINVAL if the window, frame rate, or compatibility
* value are invalid.
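A sketch of a video-surface hint, assuming the binding's enum values mirror the C constants (1 = FIXED_SOURCE compatibility, 0 = ONLY_IF_SEAMLESS strategy); the casts keep the example independent of the binding's variant names:

package example

import android "core:sys/android"

// Hint 24 fps video content, permitting only a seamless refresh-rate switch.
hint_video_frame_rate :: proc(window: ^android.ANativeWindow) -> i32 {
	return android.ANativeWindow_setFrameRateWithChangeStrategy(
		window,
		24,
		android.ANativeWindow_FrameRateCompatibility(1), // FIXED_SOURCE in the C header
		android.ANativeWindow_ChangeFrameRateStrategy(0), // ONLY_IF_SEAMLESS in the C header
	)
}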
ANativeWindow_toSurface ¶
ANativeWindow_toSurface :: proc "c" (env: ^^JNINativeInterface, window: ^ANativeWindow) -> jobject ---
*
* Return a Java Surface object derived from the ANativeWindow, for interacting
* with it through Java code. The returned Java object acquires a reference on
* the ANativeWindow; maintains it through the Java object's life cycle;
* and will automatically release the reference when the Java object gets garbage
* collected.
*
* Available since API level 26.
ANativeWindow_tryAllocateBuffers ¶
ANativeWindow_tryAllocateBuffers :: proc "c" (window: ^ANativeWindow) ---
*
* Provides a hint to the window that buffers should be preallocated ahead of
* time. Note that the window implementation is not guaranteed to preallocate
* any buffers, for instance if an implementation disallows allocation of new
* buffers, or if there is insufficient memory in the system to preallocate
* additional buffers.
*
* Available since API level 30.
ANativeWindow_unlockAndPost ¶
ANativeWindow_unlockAndPost :: proc "c" (window: ^ANativeWindow) -> i32 ---
*
* Unlock the window's drawing surface after previously locking it,
* posting the new buffer to the display.
*
* \return 0 for success, or a negative value on error.
ANeuralNetworksBurst_create ¶
ANeuralNetworksBurst_create :: proc "c" (compilation: ^ANeuralNetworksCompilation, burst: ^^ANeuralNetworksBurst) -> NNResultCode ---
*
* Create a {@link ANeuralNetworksBurst} to apply the given compilation.
* This only creates the burst object. Computation is only performed once
* {@link ANeuralNetworksExecution_burstCompute} is invoked with a valid
* {@link ANeuralNetworksExecution} and {@link ANeuralNetworksBurst}.
*
* <p>The provided compilation must outlive the burst object.</p>
*
* Available since NNAPI feature level 3.
* Available since API level 29.
*
* @param compilation The {@link ANeuralNetworksCompilation} to be evaluated.
* @param burst The newly created object or NULL if unsuccessful.
*
* @return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_BAD_DATA
* if the compilation is invalid.
ANeuralNetworksBurst_free ¶
ANeuralNetworksBurst_free :: proc "c" (burst: ^ANeuralNetworksBurst) ---
*
* Destroys the burst object.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
*
* @param burst The burst object to be destroyed. Passing NULL is acceptable and
* results in no operation.
ANeuralNetworksCompilation_create ¶
ANeuralNetworksCompilation_create :: proc "c" (model: ^ANeuralNetworksModel, compilation: ^^ANeuralNetworksCompilation) -> NNResultCode ---
*
* Create a {@link ANeuralNetworksCompilation} to compile the given model.
*
* The model passed to this function is termed the "main model" of the
* compilation, to distinguish it from other models referred to by an Operand
* of type {@link ANEURALNETWORKS_MODEL} within this compilation.
*
* <p>This function only creates the object. Compilation is only performed once
* {@link ANeuralNetworksCompilation_finish} is invoked.</p>
*
* <p>{@link ANeuralNetworksCompilation_finish} should be called once
* all desired properties have been set on the compilation.</p>
*
* <p>{@link ANeuralNetworksModel_free} should be called once the compilation
* is no longer needed.</p>
*
* <p>The provided model must outlive the compilation.</p>
*
* The model must already have been finished by a call to
* {@link ANeuralNetworksModel_finish}.
*
* See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param model The {@link ANeuralNetworksModel} to be compiled.
* @param compilation The newly created object or NULL if unsuccessful.
*
* @return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_BAD_DATA
* if the model is invalid.
ANeuralNetworksCompilation_createForDevices ¶
ANeuralNetworksCompilation_createForDevices :: proc "c" (model: ^ANeuralNetworksModel, devices: [^]^ANeuralNetworksDevice, numDevices: u32, compilation: ^^ANeuralNetworksCompilation) -> NNResultCode ---
*
* Create a {@link ANeuralNetworksCompilation} to compile the given model for a specified set
* of devices. If more than one device is specified, the compilation will
* distribute the workload automatically across the devices. The model must be fully
* supported by the specified set of devices. This means that
* ANeuralNetworksModel_getSupportedOperationsForDevices() must have returned true for every
* operation for that model/devices pair.
*
* The user must handle all compilation and execution failures from the
* specified set of devices. This is in contrast to a use of {@link
* ANeuralNetworksCompilation_create}, where the runtime will attempt to recover
* from such failures.
*
* The model passed to this function is termed the "main model" of the
* compilation, to distinguish it from other models referred to by an Operand
* of type {@link ANEURALNETWORKS_MODEL} within this compilation.
*
* @param model The {@link ANeuralNetworksModel} to be compiled.
* @param devices The set of devices. Must not contain duplicates.
* @param numDevices The number of devices in the set.
* @param compilation The newly created object or NULL if unsuccessful.
*
* @return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_BAD_DATA
* if the model is invalid.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
ANeuralNetworksCompilation_finish ¶
ANeuralNetworksCompilation_finish :: proc "c" (compilation: ^ANeuralNetworksCompilation) -> NNResultCode ---
*
* Indicate that we have finished modifying a compilation. Required before
* calling {@link ANeuralNetworksBurst_create} or
* {@link ANeuralNetworksExecution_create}.
*
* An application must ensure that no other thread uses the compilation at the
* same time.
*
* This function must only be called once for a given compilation.
*
* If {@link ANeuralNetworksCompilation_setTimeout} was called on this
* compilation, and the compilation is not able to be finished before the
* timeout duration is exceeded, then compilation may be aborted, in which case
* ANEURALNETWORKS_MISSED_DEADLINE_* {@link ResultCode} will be returned.
*
* See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param compilation The compilation to be finished.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
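Putting create, finish, and free (documented just below) together; a sketch assuming the success variant of NNResultCode is named .NO_ERROR in this binding:

package example

import android "core:sys/android"

// Compile an already-finished model; returns nil on failure.
compile :: proc(model: ^android.ANeuralNetworksModel) -> ^android.ANeuralNetworksCompilation {
	compilation: ^android.ANeuralNetworksCompilation
	if android.ANeuralNetworksCompilation_create(model, &compilation) != .NO_ERROR {
		return nil
	}
	if android.ANeuralNetworksCompilation_finish(compilation) != .NO_ERROR {
		android.ANeuralNetworksCompilation_free(compilation)
		return nil
	}
	return compilation
}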
ANeuralNetworksCompilation_free ¶
ANeuralNetworksCompilation_free :: proc "c" (compilation: ^ANeuralNetworksCompilation) ---
*
* Destroy a compilation.
*
* The compilation need not have been finished by a call to
* {@link ANeuralNetworksCompilation_finish}.
*
* See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param compilation The compilation to be destroyed. Passing NULL is acceptable and
* results in no operation.
ANeuralNetworksCompilation_getPreferredMemoryAlignmentForInput ¶
ANeuralNetworksCompilation_getPreferredMemoryAlignmentForInput :: proc "c" (compilation: ^ANeuralNetworksCompilation, index: u32, alignment: ^u32) -> NNResultCode ---
*
* Get the preferred buffer and memory alignment of an input to an execution created from a
* particular compilation.
*
* The user may use the returned alignment value to guide the layout of the input buffer or memory
* pool. To achieve the best performance, make sure the address of the buffer passed in
* {@link ANeuralNetworksExecution_setInput}, or the offset value passed in
* {@link ANeuralNetworksExecution_setInputFromMemory}, is a multiple of the preferred alignment
* value of the same input. A driver may choose to allocate a separate buffer and do memory copying
* if the provided buffer or memory does not satisfy the preferred alignment.
*
* See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
*
* @param compilation The compilation object. It must already have been finished by calling
* {@link ANeuralNetworksCompilation_finish}.
* @param index The index of the input argument we are referencing from the compilation. It is
* an index into the inputs list passed to
* {@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not
* the index associated with {@link ANeuralNetworksModel_addOperand}.
* @param alignment The returned preferred alignment in bytes. It will be a power of 2.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
* ANEURALNETWORKS_UNEXPECTED_NULL if either compilation or alignment is NULL.
* ANEURALNETWORKS_BAD_STATE if the compilation has not been finished.
* ANEURALNETWORKS_BAD_DATA if the index is out of range.
*
* Available since NNAPI feature level 5.
* Available since API level 31.
ANeuralNetworksCompilation_getPreferredMemoryAlignmentForOutput ¶
ANeuralNetworksCompilation_getPreferredMemoryAlignmentForOutput :: proc "c" (compilation: ^ANeuralNetworksCompilation, index: u32, alignment: ^u32) -> NNResultCode ---
*
* Get the preferred buffer and memory alignment of an output to an execution created from a
* particular compilation.
*
* The user may use the returned alignment value to guide the layout of the output buffer or memory
* pool. To achieve the best performance, make sure the address of the buffer passed in
* {@link ANeuralNetworksExecution_setOutput}, or the offset value passed in
* {@link ANeuralNetworksExecution_setOutputFromMemory}, is a multiple of the preferred alignment
* value of the same output. A driver may choose to allocate a separate buffer and do memory copying
* if the provided buffer or memory does not satisfy the preferred alignment.
*
* See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
*
* @param compilation The compilation object. It must already have been finished by calling
* {@link ANeuralNetworksCompilation_finish}.
* @param index The index of the output argument we are referencing from the compilation. It is
* an index into the outputs list passed to
* {@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not
* the index associated with {@link ANeuralNetworksModel_addOperand}.
* @param alignment The returned preferred alignment in bytes. It will be a power of 2.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
* ANEURALNETWORKS_UNEXPECTED_NULL if either compilation or alignment is NULL.
* ANEURALNETWORKS_BAD_STATE if the compilation has not been finished.
* ANEURALNETWORKS_BAD_DATA if the index is out of range.
*
* Available since NNAPI feature level 5.
* Available since API level 31.
ANeuralNetworksCompilation_getPreferredMemoryPaddingForInput ¶
ANeuralNetworksCompilation_getPreferredMemoryPaddingForInput :: proc "c" (compilation: ^ANeuralNetworksCompilation, index: u32, padding: ^u32) -> NNResultCode ---
*
* Get the preferred buffer and memory end padding of an input to an execution created from a
* particular compilation.
*
* The user may use the returned padding value to guide the layout of the input buffer or memory
* pool. To achieve the best performance, make sure the length value passed in
* {@link ANeuralNetworksExecution_setInput} or
* {@link ANeuralNetworksExecution_setInputFromMemory} is greater than or equal to the raw size of
* the input (i.e. the size of an element multiplied by the number of elements) rounding up to
* a multiple of the preferred padding value of the same input. A driver may choose to allocate a
* separate buffer and do memory copying if the provided buffer or memory value does not satisfy
* the preferred padding.
*
* See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
* See {@link ANeuralNetworksExecution_enableInputAndOutputPadding},
* {@link ANeuralNetworksExecution_setInput}, and
* {@link ANeuralNetworksExecution_setInputFromMemory} for information on passing
* input buffer or memory padding to the driver.
*
* @param compilation The compilation object. It must already have been finished by calling
* {@link ANeuralNetworksCompilation_finish}.
* @param index The index of the input argument we are referencing from the compilation. It is
* an index into the inputs list passed to
* {@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not
* the index associated with {@link ANeuralNetworksModel_addOperand}.
* @param padding The returned preferred padding in bytes. It will be a power of 2.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
* ANEURALNETWORKS_UNEXPECTED_NULL if either compilation or padding is NULL.
* ANEURALNETWORKS_BAD_STATE if the compilation has not been finished.
* ANEURALNETWORKS_BAD_DATA if the index is out of range.
*
* Available since NNAPI feature level 5.
* Available since API level 31.
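The alignment and padding hints for a given input are typically queried together when sizing a host buffer. A sketch, again assuming .NO_ERROR is the NNResultCode success variant:

package example

import android "core:sys/android"

// Fetch the preferred alignment and end padding for one input of a
// finished compilation. Returns ok = false if either query fails.
input_layout_hints :: proc(
	compilation: ^android.ANeuralNetworksCompilation,
	index: u32,
) -> (alignment, padding: u32, ok: bool) {
	if android.ANeuralNetworksCompilation_getPreferredMemoryAlignmentForInput(compilation, index, &alignment) != .NO_ERROR {
		return
	}
	if android.ANeuralNetworksCompilation_getPreferredMemoryPaddingForInput(compilation, index, &padding) != .NO_ERROR {
		return
	}
	ok = true
	return
}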
ANeuralNetworksCompilation_getPreferredMemoryPaddingForOutput ¶
ANeuralNetworksCompilation_getPreferredMemoryPaddingForOutput :: proc "c" (compilation: ^ANeuralNetworksCompilation, index: u32, padding: ^u32) -> NNResultCode ---
*
* Get the preferred memory end padding of an output to an execution created from a particular
* compilation.
*
* The user may use the returned padding value to guide the layout of the output buffer or memory
* pool. To achieve the best performance, make sure the length value passed in
* {@link ANeuralNetworksExecution_setOutput} or
* {@link ANeuralNetworksExecution_setOutputFromMemory} is greater than or equal to the raw size of
* the output (i.e. the size of an element multiplied by the number of elements) rounding up to
* a multiple of the preferred padding value of the same output. A driver may choose to allocate a
* separate buffer and do memory copying if the provided buffer or memory value does not satisfy
* the preferred padding.
*
* See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
* See {@link ANeuralNetworksExecution_enableInputAndOutputPadding},
* {@link ANeuralNetworksExecution_setOutput}, and
* {@link ANeuralNetworksExecution_setOutputFromMemory} for information on passing
* output buffer or memory padding to the driver.
*
* @param compilation The compilation object. It must already have been finished by calling
* {@link ANeuralNetworksCompilation_finish}.
* @param index The index of the output argument we are referencing from the compilation. It is
* an index into the outputs list passed to
* {@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not
* the index associated with {@link ANeuralNetworksModel_addOperand}.
* @param padding The returned preferred padding in bytes. It will be a power of 2.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
* ANEURALNETWORKS_UNEXPECTED_NULL if either compilation or padding is NULL.
* ANEURALNETWORKS_BAD_STATE if the compilation has not been finished.
* ANEURALNETWORKS_BAD_DATA if the index is out of range.
*
* Available since NNAPI feature level 5.
* Available since API level 31.
ANeuralNetworksCompilation_setCaching ¶
ANeuralNetworksCompilation_setCaching :: proc "c" (compilation: ^ANeuralNetworksCompilation, cacheDir: cstring, token: [^]u8) -> NNResultCode ---
*
* Sets the compilation caching signature and the cache directory.
*
* Provides optional caching information to the runtime for faster repeated
* compilation.
*
* See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
*
* @param compilation The compilation to be modified.
* @param cacheDir The cache directory for the runtime to store and retrieve caching
* data. It is recommended to use the code cache directory provided
* by the Android runtime. If not using the code cache directory, the
* user should choose a directory local to the application, and is
* responsible for managing the cache entries.
* @param token The token provided by the user to specify a model must be of length
* ANEURALNETWORKS_BYTE_SIZE_OF_CACHE_TOKEN. The user should ensure that
* the token is unique to a model within the application. The NNAPI
* runtime cannot detect token collisions; a collision will result in a
* failed execution or in a successful execution that produces incorrect
* output values.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
ANeuralNetworksCompilation_setPreference ¶
ANeuralNetworksCompilation_setPreference :: proc "c" (compilation: ^ANeuralNetworksCompilation, preference: PreferenceCode) -> NNResultCode ---
*
* Sets the execution preference.
*
* <p>Provides guidance to the runtime when trade-offs are possible. By default the runtime
* uses PREFER_FAST_SINGLE_ANSWER.</p>
*
* See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param compilation The compilation to be modified.
* @param preference Either {@link ANEURALNETWORKS_PREFER_LOW_POWER},
* {@link ANEURALNETWORKS_PREFER_FAST_SINGLE_ANSWER}, or
* {@link ANEURALNETWORKS_PREFER_SUSTAINED_SPEED}.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
ANeuralNetworksCompilation_setPriority ¶
ANeuralNetworksCompilation_setPriority :: proc "c" (compilation: ^ANeuralNetworksCompilation, priority: PriorityCode) -> NNResultCode ---
*
* Set the execution priority.
*
* Execution priorities are relative to other executions created by the same
* application (specifically same uid) for the same device. Specifically,
* priorities of executions from one application will not affect executions from
* another application. Similarly, priorities of executions on one device will
* not affect executions on another device.
*
* Higher priority executions may use more compute resources than lower priority
* executions, and may preempt or starve lower priority executions.
*
* See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
*
* @param compilation The compilation to be modified.
* @param priority The relative priority of the execution compared to other
* executions created by the application. Must be one of
* ANEURALNETWORKS_PRIORITY_*.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
ANeuralNetworksCompilation_setTimeout ¶
ANeuralNetworksCompilation_setTimeout :: proc "c" (compilation: ^ANeuralNetworksCompilation, duration: u64) -> NNResultCode ---
*
* Set the maximum expected duration for compiling the model.
*
* If the device is not able to complete the compilation within the specified
* duration, the compilation may be aborted. The timeout duration begins at the
* call to {@link ANeuralNetworksCompilation_finish}.
*
* This timeout duration acts as a hint to drivers, and can be used to both free
* up compute resources within the driver and return control back to the
* application quicker than is possible without the hint. It enables drivers
* that are able to estimate how long a compilation will take to abort the
* compilation before it has even started if the driver believes the compilation
* cannot be completed within the timeout duration. Similarly, it enables
* drivers to abort an ongoing compilation if it is taking too long. However,
* this call does not guarantee that the compilation will complete or abort
* within the timeout duration.
*
* By default (i.e., unless ANeuralNetworksCompilation_setTimeout is called),
* the timeout duration for compiling the model is considered infinite.
*
* The {@link ANeuralNetworksCompilation} must have been created with
* {@link ANeuralNetworksCompilation_createForDevices} with numDevices = 1,
* otherwise this function will fail with ANEURALNETWORKS_BAD_DATA. If the
* device has a feature level reported by
* {@link ANeuralNetworksDevice_getFeatureLevel} that is lower than
* {@link ANEURALNETWORKS_FEATURE_LEVEL_4}, then the timeout duration hint will
* be ignored.
*
* See {@link ANeuralNetworksCompilation} for information on multithreaded usage.
*
* @param compilation The compilation to be modified.
* @param duration The maximum amount of time in nanoseconds that is expected to
* be spent finishing a compilation. If this duration is exceeded, the
* compilation may be aborted. If set to 0, the timeout duration is
* considered infinite.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
ANeuralNetworksDevice_getFeatureLevel ¶
ANeuralNetworksDevice_getFeatureLevel :: proc "c" (device: ^ANeuralNetworksDevice, featureLevel: ^FeatureLevelCode) -> NNResultCode ---
*
* Get the NNAPI feature level of the specified NNAPI device.
*
* Each device has a supported feature level, which is the most advanced NNAPI specification
* and features this driver implements. For example, if the driver implements the features
* introduced in {@link ANEURALNETWORKS_FEATURE_LEVEL_2}, but does not implement the features
* introduced after {@link ANEURALNETWORKS_FEATURE_LEVEL_2}, the value would be
* {@link ANEURALNETWORKS_FEATURE_LEVEL_2}. Developers could decide whether or not the specified
* device should be used for a model that has certain feature requirements.
*
* NNAPI device feature level is closely related to NNAPI runtime feature level
* ({@link ANeuralNetworks_getRuntimeFeatureLevel}), which indicates an NNAPI runtime feature
* level (the most advanced NNAPI specification and features that the runtime implements).
* An NNAPI device feature level is always less than or equal to the runtime feature level.
*
* This function produces a {@link FeatureLevelCode} enum value, NOT an Android API level.
*
* @param device The representation of the specified device.
* @param featureLevel {@link FeatureLevelCode} of the most advanced feature this driver implements.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
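A typical use is filtering candidate devices before handing them to ANeuralNetworksCompilation_createForDevices. A sketch, assuming .NO_ERROR is the NNResultCode success variant:

package example

import android "core:sys/android"

// Reject devices whose reported feature level is below what the model needs.
device_meets_level :: proc(device: ^android.ANeuralNetworksDevice, required: android.FeatureLevelCode) -> bool {
	level: android.FeatureLevelCode
	if android.ANeuralNetworksDevice_getFeatureLevel(device, &level) != .NO_ERROR {
		return false
	}
	return level >= required
}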
ANeuralNetworksDevice_getName ¶
ANeuralNetworksDevice_getName :: proc "c" (device: ^ANeuralNetworksDevice, name: ^cstring) -> NNResultCode ---
*
* Get the name of the specified device.
*
* @param device The representation of the specified device.
* @param name The returned name of the specified device. The name will be in UTF-8
* and will be null-terminated. It will be recognizable as a known device name
* rather than a cryptic string. For devices with feature level reported by
* {@link ANeuralNetworksDevice_getFeatureLevel} that is
* {@link ANEURALNETWORKS_FEATURE_LEVEL_3} and higher, the format of the name is
* {VENDOR}-{DEVICE}. For devices with feature level
* {@link ANEURALNETWORKS_FEATURE_LEVEL_2} or lower, the format of the name is
* undefined. The name will remain valid for the duration of the application.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
ANeuralNetworksDevice_getType ¶
ANeuralNetworksDevice_getType :: proc "c" (device: ^ANeuralNetworksDevice, type: ^DeviceTypeCode) -> NNResultCode ---
*
* Get the type of a given device.
*
* The device type can be used to help application developers to distribute Machine Learning
* workloads and other workloads such as graphical rendering.
* E.g., for an app which renders AR scenes based on real time object detection results,
* the developer could choose an ACCELERATOR type device for ML workloads, and reserve GPU
* for graphical rendering.
*
* @param device The representation of the specified device.
* @param type The returned {@link DeviceTypeCode} of the specified device.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
ANeuralNetworksDevice_getVersion ¶
ANeuralNetworksDevice_getVersion :: proc "c" (device: ^ANeuralNetworksDevice, version: ^cstring) -> NNResultCode ---
*
* Get the version of the driver implementation of the specified device.
*
* It’s the responsibility of the driver implementor to ensure that this version string
* uniquely distinguishes this implementation from all previous implementations.
*
* This version string must not be confused with the feature level which is solely defined
* by {@link ANeuralNetworksDevice_getFeatureLevel}. There is no implicit ordering of the versions.
* For example, it is not possible to filter all drivers older than a certain version.
*
* Application developers may use this version string to avoid or prefer specific driver
* implementations. For example, an application may want to do so because:
* - A specific version of the driver does not provide the required performance,
* perhaps because of a performance regression.
* - A specific version of the driver has a bug or returns results that don’t match
* the minimum precision requirement for the application.
*
* @param device The representation of the specified device.
* @param version The returned version string of the driver for the specified device. The
* string will be in UTF-8 and will be null-terminated. For devices with feature
* level 28 or lower, "UNKNOWN" will be returned. The version string will remain
* valid for the duration of the application.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
ANeuralNetworksDevice_wait ¶
ANeuralNetworksDevice_wait :: proc "c" (device: ^ANeuralNetworksDevice) -> NNResultCode ---
*
* Wait until the device is in a live state.
*
* A device may encounter internal errors and temporarily enter a dead state. A
* call that uses a device in such a state will return with the error
* {@link ANEURALNETWORKS_DEAD_OBJECT}. ANeuralNetworksDevice_wait will block until
* the device is in a live state.
*
* @param device The representation of the specified device.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
ANeuralNetworksEvent_createFromSyncFenceFd ¶
ANeuralNetworksEvent_createFromSyncFenceFd :: proc "c" (sync_fence_fd: i32, event: ^^ANeuralNetworksEvent) -> NNResultCode ---
*
* Create a {@link ANeuralNetworksEvent} from a sync_fence file descriptor.
*
* The newly created ANeuralNetworksEvent does not take ownership of the provided sync_fence_fd,
* it will instead dup the provided sync_fence_fd and own the duplicate.
*
* @param sync_fence_fd The sync_fence file descriptor.
* @param event The newly created object or NULL if unsuccessful.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
ANeuralNetworksEvent_free ¶
ANeuralNetworksEvent_free :: proc "c" (event: ^ANeuralNetworksEvent) ---
*
* Destroys the event.
*
* See {@link ANeuralNetworksExecution} for information on multithreaded usage.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param event The event object to be destroyed. Passing NULL is acceptable and
* results in no operation.
ANeuralNetworksEvent_getSyncFenceFd ¶
ANeuralNetworksEvent_getSyncFenceFd :: proc "c" (event: ^ANeuralNetworksEvent, sync_fence_fd: ^i32) -> NNResultCode ---
*
* Get sync_fence file descriptor from the event.
*
* If the ANeuralNetworksEvent is not backed by a sync fence, the sync_fence_fd
* will be set to -1, and ANEURALNETWORKS_BAD_DATA will be returned.
*
* See {@link ANeuralNetworksEvent_createFromSyncFenceFd} and
* {@link ANeuralNetworksExecution_startComputeWithDependencies} to see how to create
* an event backed by a sync fence.
*
* The user takes ownership of the returned fd, and must close the returned file descriptor when
* it is no longer needed.
*
* @param event An event that is backed by a sync fence.
* @param sync_fence_fd The sync_fence file descriptor. The file descriptor will
* be set to -1 if there is an error.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
ANeuralNetworksEvent_wait ¶
ANeuralNetworksEvent_wait :: proc "c" (event: ^ANeuralNetworksEvent) -> NNResultCode ---
*
* Waits until the execution completes.
*
* More than one thread can wait on an event. When the execution completes,
* all threads will be released.
*
* If {@link ANeuralNetworksExecution_setTimeout} was called on the execution
* corresponding to this event, and the execution is not able to complete
* before the duration is exceeded, the execution may be aborted, in which case
* ANEURALNETWORKS_MISSED_DEADLINE_* {@link ResultCode} will be returned here.
*
* If the execution contains a {@link ANEURALNETWORKS_WHILE} operation, and
* the condition model does not output false within the loop timeout duration,
* the execution will be aborted, and ANEURALNETWORKS_MISSED_DEADLINE_* {@link ResultCode}
* will be returned here.
*
* See {@link ANeuralNetworksExecution} for information on execution states and multithreaded usage.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param event The event that will be signaled on completion.
* @return ANEURALNETWORKS_NO_ERROR if the execution completed normally.
* ANEURALNETWORKS_UNMAPPABLE if the execution input or output memory cannot
* be properly mapped.
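The usual join pattern for an asynchronous execution blocks on the event and then destroys it (the helper name is hypothetical):

package example

import android "core:sys/android"

// Block until the execution signals its event, then release the event.
wait_and_release :: proc(event: ^android.ANeuralNetworksEvent) -> android.NNResultCode {
	rc := android.ANeuralNetworksEvent_wait(event)
	android.ANeuralNetworksEvent_free(event)
	return rc
}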
ANeuralNetworksExecution_burstCompute ¶
ANeuralNetworksExecution_burstCompute :: proc "c" (execution: ^ANeuralNetworksExecution, burst: ^ANeuralNetworksBurst) -> NNResultCode ---
*
* Schedule synchronous evaluation of the execution on a burst object.
*
* <p>Schedules synchronous evaluation of the execution. Returns once the
* execution has completed and the outputs are ready to be consumed.</p>
*
* If {@link ANeuralNetworksExecution_setTimeout} was called on the execution,
* and the execution is not able to complete before the timeout duration is
* exceeded, then execution may be aborted, in which case
* ANEURALNETWORKS_MISSED_DEADLINE_* {@link ResultCode} will be returned.
*
* If the execution contains a {@link ANEURALNETWORKS_WHILE} operation, and
* the condition model does not output false within the loop timeout duration,
* then execution will be aborted and ANEURALNETWORKS_MISSED_DEADLINE_* {@link ResultCode}
* will be returned. If the device has a feature level reported by
* {@link ANeuralNetworksDevice_getFeatureLevel} that is lower than
* {@link ANEURALNETWORKS_FEATURE_LEVEL_4}, then the timeout duration hint will be ignored.
*
* <p>There must be at most one {@link ANeuralNetworksExecution} processing at
* any given time for any given burst object. Any
* {@link ANeuralNetworksExecution} launched before the previous has finished
* will result in ANEURALNETWORKS_BAD_STATE.</p>
*
* Before NNAPI feature level 5, this function may only be invoked when the execution is in the
* preparation state. Starting at NNAPI feature level 5, if the user sets the execution to be
* reusable by {@link ANeuralNetworksExecution_setReusable}, this function may also be invoked when
* the execution is in the completed state.
*
* See {@link ANeuralNetworksExecution} for information on execution states and multithreaded usage.
*
* See {@link ANeuralNetworksExecution_compute} for synchronous execution.
* See {@link ANeuralNetworksExecution_startCompute} for regular asynchronous execution.
* See {@link ANeuralNetworksExecution_startComputeWithDependencies} for
* asynchronous execution with dependencies.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
*
* @param burst The burst object to execute on.
* @param execution The execution to be scheduled and executed. The execution
* must be created from the same {@link
* ANeuralNetworksCompilation} as the burst object.
*
* @return ANEURALNETWORKS_NO_ERROR if the execution completed normally.
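A sketch of reusing one burst across a sequence of executions, honoring the one-at-a-time rule above; assumes .NO_ERROR is the NNResultCode success variant:

package example

import android "core:sys/android"

// Run executions through one burst, strictly one at a time. All executions
// must be created from the same compilation as the burst.
run_burst_sequence :: proc(
	compilation: ^android.ANeuralNetworksCompilation,
	executions:  []^android.ANeuralNetworksExecution,
) -> android.NNResultCode {
	burst: ^android.ANeuralNetworksBurst
	rc := android.ANeuralNetworksBurst_create(compilation, &burst)
	if rc != .NO_ERROR {
		return rc
	}
	defer android.ANeuralNetworksBurst_free(burst)
	for execution in executions {
		rc = android.ANeuralNetworksExecution_burstCompute(execution, burst)
		if rc != .NO_ERROR {
			break
		}
	}
	return rc
}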
ANeuralNetworksExecution_compute ¶
ANeuralNetworksExecution_compute :: proc "c" (execution: ^ANeuralNetworksExecution) -> NNResultCode ---
*
* Schedule synchronous evaluation of the execution.
*
* <p>Schedules synchronous evaluation of the execution. Returns once the
* execution has completed and the outputs are ready to be consumed.
* </p>
*
* If {@link ANeuralNetworksExecution_setTimeout} was called on this execution,
* and the execution is not able to complete before the timeout duration is
* exceeded, then execution may be aborted, in which case
* ANEURALNETWORKS_MISSED_DEADLINE_* {@link ResultCode} will be returned. If the device has
* a feature level reported by {@link ANeuralNetworksDevice_getFeatureLevel}
* that is lower than {@link ANEURALNETWORKS_FEATURE_LEVEL_4}, then the timeout duration hint will be ignored.
*
* If this execution contains a {@link ANEURALNETWORKS_WHILE} operation, and
* the condition model does not output false within the loop timeout duration,
* then execution will be aborted and ANEURALNETWORKS_MISSED_DEADLINE_* {@link ResultCode}
* will be returned.
*
* Before NNAPI feature level 5, this function may only be invoked when the execution is in the
* preparation state. Starting at NNAPI feature level 5, if the user sets the execution to be
* reusable by {@link ANeuralNetworksExecution_setReusable}, this function may also be invoked when
* the execution is in the completed state.
*
* See {@link ANeuralNetworksExecution} for information on execution states and multithreaded usage.
*
* See {@link ANeuralNetworksExecution_burstCompute} for burst synchronous execution.
* See {@link ANeuralNetworksExecution_startCompute} for regular asynchronous execution.
* See {@link ANeuralNetworksExecution_startComputeWithDependencies} for
* asynchronous execution with dependencies.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
*
* @param execution The execution to be scheduled and executed.
*
* @return ANEURALNETWORKS_NO_ERROR if the execution completed normally.
* ANEURALNETWORKS_UNMAPPABLE if the execution input or output memory cannot
* be properly mapped.
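A single synchronous inference ties together create (below), setInput (below), and compute. A sketch assuming exactly-sized f32 buffers (no padding enabled), that ANeuralNetworksExecution_setOutput mirrors the setInput signature, and that .NO_ERROR is the NNResultCode success variant:

package example

import android "core:sys/android"

// One synchronous inference with a single f32 input and output.
run_once :: proc(
	compilation: ^android.ANeuralNetworksCompilation,
	input:       []f32,
	output:      []f32,
) -> android.NNResultCode {
	execution: ^android.ANeuralNetworksExecution
	rc := android.ANeuralNetworksExecution_create(compilation, &execution)
	if rc != .NO_ERROR {
		return rc
	}
	defer android.ANeuralNetworksExecution_free(execution) // safe: compute is synchronous

	// A nil type reuses the operand type given when the model was built.
	rc = android.ANeuralNetworksExecution_setInput(execution, 0, nil, raw_data(input), uint(len(input) * size_of(f32)))
	if rc != .NO_ERROR {
		return rc
	}
	rc = android.ANeuralNetworksExecution_setOutput(execution, 0, nil, raw_data(output), uint(len(output) * size_of(f32)))
	if rc != .NO_ERROR {
		return rc
	}
	return android.ANeuralNetworksExecution_compute(execution)
}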
ANeuralNetworksExecution_create ¶
ANeuralNetworksExecution_create :: proc "c" (compilation: ^ANeuralNetworksCompilation, execution: ^^ANeuralNetworksExecution) -> NNResultCode ---
*
* Create a {@link ANeuralNetworksExecution} to apply the given compilation.
* This only creates the object. Computation is only performed once
* {@link ANeuralNetworksExecution_burstCompute},
* {@link ANeuralNetworksExecution_compute},
* {@link ANeuralNetworksExecution_startCompute} or
* {@link ANeuralNetworksExecution_startComputeWithDependencies} is invoked.
*
* <p>The provided compilation must outlive the execution.</p>
*
* See {@link ANeuralNetworksExecution} for information on multithreaded usage.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param compilation The {@link ANeuralNetworksCompilation} to be evaluated.
* @param execution The newly created object or NULL if unsuccessful.
*
* @return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_BAD_DATA
* if the compilation is invalid.
ANeuralNetworksExecution_enableInputAndOutputPadding ¶
ANeuralNetworksExecution_enableInputAndOutputPadding :: proc "c" (execution: ^ANeuralNetworksExecution, enable: bool) -> NNResultCode ---
*
* Specifies whether the {@link ANeuralNetworksExecution} is able to accept padded input and output
* buffers and memory objects.
*
* By default, the input and output buffers and memory objects of {@link ANeuralNetworksExecution}
* do not allow padding.
*
* Setting the execution to accept padded input and output buffers and memory objects enables the
* length argument of {@link ANeuralNetworksExecution_setInput},
* {@link ANeuralNetworksExecution_setInputFromMemory}, {@link ANeuralNetworksExecution_setOutput},
* and {@link ANeuralNetworksExecution_setOutputFromMemory} to be greater than the raw size of the
* operand (i.e. the size of an element multiplied by the number of elements). The extra bytes
* at the end of the buffer or memory region may be used by the driver to access data in chunks,
* for efficiency.
*
* This method must not be called after {@link ANeuralNetworksExecution_setInput},
* {@link ANeuralNetworksExecution_setInputFromMemory}, {@link ANeuralNetworksExecution_setOutput},
* or {@link ANeuralNetworksExecution_setOutputFromMemory}.
*
* See {@link ANeuralNetworksExecution} for information on multithreaded usage.
*
* @param execution The execution to be modified.
* @param enable 'true' if the execution is to be able to accept padded input and output buffers
* and memory objects, 'false' if not.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
* ANEURALNETWORKS_UNEXPECTED_NULL if execution is NULL.
* ANEURALNETWORKS_BAD_STATE if {@link ANeuralNetworksExecution_setInput},
* {@link ANeuralNetworksExecution_setInputFromMemory},
* {@link ANeuralNetworksExecution_setOutput}, or
* {@link ANeuralNetworksExecution_setOutputFromMemory} has been called on the execution.
*
* Available since NNAPI feature level 5.
* Available since API level 31.
ANeuralNetworksExecution_free ¶
ANeuralNetworksExecution_free :: proc "c" (execution: ^ANeuralNetworksExecution) ---
*
* Destroy an execution.
*
* <p>The execution need not have been scheduled by a call to
* {@link ANeuralNetworksExecution_burstCompute},
* {@link ANeuralNetworksExecution_compute},
* {@link ANeuralNetworksExecution_startCompute} or
* {@link ANeuralNetworksExecution_startComputeWithDependencies} but if it has been scheduled,
* then the application must not call {@link ANeuralNetworksExecution_free}
* until the execution has completed (i.e.,
* {@link ANeuralNetworksExecution_burstCompute},
* {@link ANeuralNetworksExecution_compute}, or
* {@link ANeuralNetworksEvent_wait} has returned).
*
* See {@link ANeuralNetworksExecution} for information on multithreaded usage.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param execution The execution to be destroyed. Passing NULL is acceptable and
* results in no operation.
ANeuralNetworksExecution_getDuration ¶
ANeuralNetworksExecution_getDuration :: proc "c" (execution: ^ANeuralNetworksExecution, durationCode: DurationCode, duration: ^u64) -> NNResultCode ---
*
* Get the time spent in the latest computation evaluated on the specified
* {@link ANeuralNetworksExecution}, in nanoseconds.
*
* This function may only be invoked when the execution is in the completed state.
*
* See {@link ANeuralNetworksExecution} for information on execution states.
*
* @param execution The execution to be queried.
* @param durationCode The measurement to be queried, specified by {@link DurationCode}.
* @param duration The returned duration. If no measurement was requested by
* {@link ANeuralNetworksExecution_setMeasureTiming}, if the
* device has a feature level reported by
* {@link ANeuralNetworksDevice_getFeatureLevel} that is lower
* than {@link ANEURALNETWORKS_FEATURE_LEVEL_3}, or for some other
* reason the duration is not available, UINT64_MAX will be returned.
* A particular device need not support any given measurement.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
ANeuralNetworksExecution_getOutputOperandDimensions ¶
ANeuralNetworksExecution_getOutputOperandDimensions :: proc "c" (execution: ^ANeuralNetworksExecution, index: i32, dimensions: [^]u32) -> NNResultCode ---
*
* Get the dimensional information of the specified output operand of the model of the
* latest computation evaluated on {@link ANeuralNetworksExecution}. The target output operand
* cannot be a scalar.
*
* This function may only be invoked when the execution is in the completed state.
*
* See {@link ANeuralNetworksExecution} for information on execution states.
*
* @param execution The execution to be queried.
* @param index The index of the output argument we are querying. It is an index into the lists
* passed to {@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not
* the index associated with {@link ANeuralNetworksModel_addOperand}.
* @param dimensions The dimension array to be filled. The size of the array must be exactly as
* large as the rank of the output operand to be queried in the model.
*
* @return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_OUTPUT_INSUFFICIENT_SIZE
* if the target output is provided an insufficient buffer at execution time,
* ANEURALNETWORKS_BAD_DATA if the index is invalid or if the target is a scalar.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
ANeuralNetworksExecution_getOutputOperandRank ¶
ANeuralNetworksExecution_getOutputOperandRank :: proc "c" (execution: ^ANeuralNetworksExecution, index: i32, rank: ^u32) -> NNResultCode ---
*
* Get the dimensional information of the specified output operand of the model of the
* latest computation evaluated on {@link ANeuralNetworksExecution}.
*
* This function may only be invoked when the execution is in the completed state.
*
* See {@link ANeuralNetworksExecution} for information on execution states.
*
* @param execution The execution to be queried.
* @param index The index of the output argument we are querying. It is
* an index into the lists passed to
* {@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not
* the index associated with {@link ANeuralNetworksModel_addOperand}.
* @param rank The rank of the output operand.
*
* @return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_OUTPUT_INSUFFICIENT_SIZE
* if the target output is provided an insufficient buffer at execution time,
* ANEURALNETWORKS_BAD_DATA if the index is invalid.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
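Example: a sketch, under the same NO_ERROR assumption, of querying the rank first and then sizing the dimensions array to exactly that rank, as required above.

package nn_sketch

import android "core:sys/android"

// After a completed execution, fetch the deduced shape of output `index`.
// Scalars are rejected by getOutputOperandDimensions, per the doc above.
query_output_shape :: proc(execution: ^android.ANeuralNetworksExecution, index: i32) -> (dims: []u32, ok: bool) {
	rank: u32
	if android.ANeuralNetworksExecution_getOutputOperandRank(execution, index, &rank) != .NO_ERROR || rank == 0 {
		return nil, false
	}
	dims = make([]u32, int(rank))
	if android.ANeuralNetworksExecution_getOutputOperandDimensions(execution, index, raw_data(dims)) != .NO_ERROR {
		delete(dims)
		return nil, false
	}
	return dims, true
}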
ANeuralNetworksExecution_setInput ¶
ANeuralNetworksExecution_setInput :: proc "c" (execution: ^ANeuralNetworksExecution, index: i32, type: ^ANeuralNetworksOperandType, buffer: rawptr, length: uint) -> NNResultCode ---
*
* Associate a user buffer with an input of the model of the
* {@link ANeuralNetworksExecution}. Evaluation of the execution must not have
* been scheduled. Once evaluation of the execution has been scheduled, the
* application must not change the content of the buffer until the execution has
* completed. Evaluation of the execution will not change the content of the
* buffer.
*
* <p>The provided buffer must outlive the execution.</p>
*
* If the input is optional, you can indicate that it is omitted by
* passing nullptr for buffer and 0 for length.
*
* Otherwise, if the user has not set the execution to accept padded input buffers by
* calling {@link ANeuralNetworksExecution_enableInputAndOutputPadding}, then the length argument
* must be equal to the raw size of the input (i.e. the size of an element multiplied by the
* number of elements). Passing a length argument with value not equal to the raw size of the input
* will result in ANEURALNETWORKS_BAD_DATA.
*
* Otherwise, if the user has set the execution to accept padded input buffers by calling
* {@link ANeuralNetworksExecution_enableInputAndOutputPadding}, the length argument may be greater
* than the raw size of the input, and the extra bytes at the end of the buffer may be used
* by the driver to access data in chunks, for efficiency. Passing a length argument with value
* less than the raw size of the input will result in ANEURALNETWORKS_BAD_DATA.
*
* This function may only be invoked when the execution is in the preparation state.
*
* See {@link ANeuralNetworksExecution} for information on execution states and multithreaded usage.
* See {@link ANeuralNetworksCompilation_getPreferredMemoryAlignmentForInput} and
* {@link ANeuralNetworksCompilation_getPreferredMemoryPaddingForInput} for information on getting
* preferred buffer alignment and padding, to improve performance.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param execution The execution to be modified.
* @param index The index of the input argument we are setting. It is
* an index into the lists passed to
* {@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not
* the index associated with
* {@link ANeuralNetworksModel_addOperand}.
* @param type The {@link ANeuralNetworksOperandType} of the
* operand. Unless the input is omitted, this should be
* used to specify the dimensions that were left
* unspecified when the operand was added to the
* model. All other properties of the type must be the
* same as specified in the model. If the type is the same
* as specified when the model was built, NULL can be
* passed. Neither the {@link ANeuralNetworksOperandType}
* nor the dimensions it points to need to outlive the call
* to {@link ANeuralNetworksExecution_setInput}.
* @param buffer The buffer containing the data.
* @param length The size of the data value in bytes plus any end padding.
*
* @return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_BAD_DATA if the
* name is not recognized or the buffer is too small for the input.
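Example: a sketch, under the NO_ERROR assumption, of binding a caller-owned buffer to input 0; passing nil for the type reuses the operand type given when the model was built.

package nn_sketch

import android "core:sys/android"

// The buffer must outlive the execution and must not be written to until
// the computation completes. With padding not enabled, length must equal
// the operand's raw size.
set_input0 :: proc(execution: ^android.ANeuralNetworksExecution, data: []f32) -> bool {
	length := uint(len(data) * size_of(f32))
	return android.ANeuralNetworksExecution_setInput(execution, 0, nil, raw_data(data), length) == .NO_ERROR
}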
ANeuralNetworksExecution_setInputFromMemory ¶
ANeuralNetworksExecution_setInputFromMemory :: proc "c" (execution: ^ANeuralNetworksExecution, index: i32, type: ^ANeuralNetworksOperandType, memory: ^ANeuralNetworksMemory, offset: uint, length: uint) -> NNResultCode ---
*
* Associate a region of a memory object with an input of the model of the
* {@link ANeuralNetworksExecution}. Evaluation of the execution must not have
* been scheduled. Once evaluation of the execution has been scheduled, the
* application must not change the content of the region until the execution has
* completed. Evaluation of the execution will not change the content of the
* region.
*
* <p>The provided memory must outlive the execution.</p>
*
* If the input is optional, you can indicate that it is omitted by
* using {@link ANeuralNetworksExecution_setInput} instead, passing nullptr for
* buffer and 0 for length.
*
* If the memory is an AHardwareBuffer of a format other than AHARDWAREBUFFER_FORMAT_BLOB created
* from {@link ANeuralNetworksMemory_createFromAHardwareBuffer}, or an opaque memory object created
* from {@link ANeuralNetworksMemory_createFromDesc}, both offset and length must be 0, indicating
* the whole memory is used.
*
* Otherwise, if the user has not set the execution to accept padded input memory objects by
* calling {@link ANeuralNetworksExecution_enableInputAndOutputPadding}, then the length argument
* must be equal to the raw size of the input (i.e. the size of an element multiplied by the
* number of elements). Passing a length argument with value not equal to the raw size of the input
* will result in ANEURALNETWORKS_BAD_DATA.
*
* Otherwise, if the user has set the execution to accept padded input memory objects by calling
* {@link ANeuralNetworksExecution_enableInputAndOutputPadding}, the length argument may be greater
* than the raw size of the input, and the extra bytes at the end of the memory region may be used
* by the driver to access data in chunks, for efficiency. Passing a length argument with value
* less than the raw size of the input will result in ANEURALNETWORKS_BAD_DATA.
*
* This function may only be invoked when the execution is in the preparation state.
*
* See {@link ANeuralNetworksExecution} for information on execution states and multithreaded usage.
* See {@link ANeuralNetworksMemory_createFromAHardwareBuffer} for information on
* AHardwareBuffer usage.
* See {@link ANeuralNetworksMemory_createFromDesc} for information on usage of memory objects
* created from memory descriptors.
* See {@link ANeuralNetworksCompilation_getPreferredMemoryAlignmentForInput} and
* {@link ANeuralNetworksCompilation_getPreferredMemoryPaddingForInput} for information on getting
* preferred memory alignment and padding, to improve performance.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param execution The execution to be modified.
* @param index The index of the input argument we are setting. It is
* an index into the lists passed to
* {@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not
* the index associated with {@link ANeuralNetworksModel_addOperand}.
* @param type The {@link ANeuralNetworksOperandType} of the
* operand. This should be used to specify the dimensions
* that were left unspecified when the operand was added
* to the model. All other properties of the type must be
* the same as specified in the model. If the type is the
* same as specified when the model was built, NULL can be
* passed. Neither the {@link ANeuralNetworksOperandType}
* nor the dimensions it points to need to outlive the call
* to {@link ANeuralNetworksExecution_setInputFromMemory}.
* @param memory The memory containing the data.
* @param offset This specifies the location of the data within the memory.
* The offset is in bytes from the start of memory.
* @param length The size of the data value in bytes plus any end padding.
*
* @return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_BAD_DATA if the
* name is not recognized or the buffer is too small for the input.
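Example: a sketch, under the NO_ERROR assumption, of binding a byte range of a shared memory object to input 0; for a non-BLOB AHardwareBuffer or a memory object from createFromDesc, both offset and length would have to be 0.

package nn_sketch

import android "core:sys/android"

// `memory` is assumed to have been created elsewhere (e.g. from a file
// descriptor) and must outlive the execution.
set_input0_from_memory :: proc(execution: ^android.ANeuralNetworksExecution, memory: ^android.ANeuralNetworksMemory, offset, length: uint) -> bool {
	return android.ANeuralNetworksExecution_setInputFromMemory(execution, 0, nil, memory, offset, length) == .NO_ERROR
}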
ANeuralNetworksExecution_setLoopTimeout ¶
ANeuralNetworksExecution_setLoopTimeout :: proc "c" (execution: ^ANeuralNetworksExecution, duration: u64) -> NNResultCode ---
*
* Set the maximum duration of WHILE loops in the specified execution.
*
* This is a fuzzy per-loop timeout intended to prevent infinite loops.
*
* If a WHILE loop condition model does not output false within the specified
* duration, the execution will be aborted.
*
* See {@link ANeuralNetworks_getDefaultLoopTimeout} and
* {@link ANeuralNetworks_getMaximumLoopTimeout} for the default
* and maximum timeout values.
*
* This function may only be invoked when the execution is in the preparation state.
*
* See {@link ANeuralNetworksExecution} for information on execution states and multithreaded usage.
*
* @param execution The execution to be modified.
* @param duration The maximum amount of time in nanoseconds that can be spent
* executing a WHILE loop. If the specified duration value exceeds the value
* produced by {@link ANeuralNetworks_getMaximumLoopTimeout}, it will be
* overridden by that value.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
* ANEURALNETWORKS_BAD_STATE if execution has started.
* ANEURALNETWORKS_UNEXPECTED_NULL if execution is NULL.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
ANeuralNetworksExecution_setMeasureTiming ¶
ANeuralNetworksExecution_setMeasureTiming :: proc "c" (execution: ^ANeuralNetworksExecution, measure: bool) -> NNResultCode ---
*
* Specifies whether duration of the {@link ANeuralNetworksExecution} is to be
* measured. Evaluation of the execution must not have been scheduled.
*
* By default, duration is not measured.
*
* The {@link ANeuralNetworksExecution} must have been created from an
* {@link ANeuralNetworksCompilation} which in turn was created from
* {@link ANeuralNetworksCompilation_createForDevices} with numDevices = 1.
* If the device has a feature level reported by
* {@link ANeuralNetworksDevice_getFeatureLevel} that is lower than
* {@link ANEURALNETWORKS_FEATURE_LEVEL_3}, then the duration will not be measured.
*
* This function may only be invoked when the execution is in the preparation state.
*
* See {@link ANeuralNetworksExecution} for information on execution states and multithreaded usage.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
*
* @param execution The execution to be modified.
* @param measure 'true' if duration is to be measured, 'false' if not.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
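Example: a sketch combining this call with ANeuralNetworksExecution_getDuration above; the .ON_HARDWARE variant name is an assumption mirroring ANEURALNETWORKS_DURATION_ON_HARDWARE.

package nn_sketch

import android "core:sys/android"

// Request timing while in the preparation state, then read back the
// on-hardware time of the latest computation in nanoseconds; UINT64_MAX
// signals that the measurement is unavailable.
on_hardware_time :: proc(execution: ^android.ANeuralNetworksExecution) -> (ns: u64, ok: bool) {
	if android.ANeuralNetworksExecution_setMeasureTiming(execution, true) != .NO_ERROR {
		return 0, false
	}
	// ... schedule the computation and wait for it to complete ...
	if android.ANeuralNetworksExecution_getDuration(execution, .ON_HARDWARE, &ns) != .NO_ERROR {
		return 0, false
	}
	return ns, ns != max(u64)
}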
ANeuralNetworksExecution_setOutput ¶
ANeuralNetworksExecution_setOutput :: proc "c" (execution: ^ANeuralNetworksExecution, index: i32, type: ^ANeuralNetworksOperandType, buffer: rawptr, length: uint) -> NNResultCode ---
*
* Associate a user buffer with an output of the model of the
* {@link ANeuralNetworksExecution}. Evaluation of the execution must not have
* been scheduled. Once evaluation of the execution has been scheduled, the
* application must not change the content of the buffer until the execution has
* completed.
*
* <p>The provided buffer must outlive the execution.</p>
*
* If the output is optional, you can indicate that it is omitted by
* passing nullptr for buffer and 0 for length.
*
* Otherwise, if the user has not set the execution to accept padded output buffers by
* calling {@link ANeuralNetworksExecution_enableInputAndOutputPadding}, then the length argument
* must be equal to the raw size of the output (i.e. the size of an element multiplied by the
* number of elements). Passing a length argument with value not equal to the raw size of the output
* will result in ANEURALNETWORKS_BAD_DATA.
*
* Otherwise, if the user has set the execution to accept padded output buffers by calling
* {@link ANeuralNetworksExecution_enableInputAndOutputPadding}, the length argument may be greater
* than the raw size of the output, and the extra bytes at the end of the buffer may be used
* by the driver to access data in chunks, for efficiency. Passing a length argument with value
* less than the raw size of the output will result in ANEURALNETWORKS_BAD_DATA.
*
* This function may only be invoked when the execution is in the preparation state.
*
* See {@link ANeuralNetworksExecution} for information on execution states and multithreaded usage.
* See {@link ANeuralNetworksCompilation_getPreferredMemoryAlignmentForOutput} and
* {@link ANeuralNetworksCompilation_getPreferredMemoryPaddingForOutput} for information on getting
* preferred buffer alignment and padding, to improve performance.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param execution The execution to be modified.
* @param index The index of the output argument we are setting. It is
* an index into the lists passed to
* {@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not
* the index associated with {@link ANeuralNetworksModel_addOperand}.
* @param type The {@link ANeuralNetworksOperandType} of the
* operand. Unless the output is omitted, this should be
* used to specify the dimensions that were left
* unspecified when the operand was added to the
* model. All other properties of the type must be the
* same as specified in the model. If the type is the same
* as specified when the model was built, NULL can be
* passed. Neither the {@link ANeuralNetworksOperandType}
* nor the dimensions it points to need to outlive the call
* to {@link ANeuralNetworksExecution_setOutput}.
* Since NNAPI feature level 3, the output operand can have unspecified
* dimensions or rank to be deduced dynamically during the execution.
* However, the user must provide a large enough buffer. The user
* can retrieve the output dimensional information after the execution
* by {@link ANeuralNetworksExecution_getOutputOperandRank} and
* {@link ANeuralNetworksExecution_getOutputOperandDimensions}.
* @param buffer The buffer where the data is to be written.
* @param length The size of the data value in bytes plus any end padding.
*
* @return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_BAD_DATA if the
* name is not recognized or the buffer is too small for the output.
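Example: a sketch, under the NO_ERROR assumption, of binding output 0 to a caller-owned buffer and marking a hypothetical optional output 1 as omitted with nil/0.

package nn_sketch

import android "core:sys/android"

bind_outputs :: proc(execution: ^android.ANeuralNetworksExecution, out: []f32) -> bool {
	length := uint(len(out) * size_of(f32))
	if android.ANeuralNetworksExecution_setOutput(execution, 0, nil, raw_data(out), length) != .NO_ERROR {
		return false
	}
	// Omitting output 1 is only valid if the model declares it optional.
	return android.ANeuralNetworksExecution_setOutput(execution, 1, nil, nil, 0) == .NO_ERROR
}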
ANeuralNetworksExecution_setOutputFromMemory ¶
ANeuralNetworksExecution_setOutputFromMemory :: proc "c" (execution: ^ANeuralNetworksExecution, index: i32, type: ^ANeuralNetworksOperandType, memory: ^ANeuralNetworksMemory, offset: uint, length: uint) -> NNResultCode ---
*
* Associate a region of a memory object with an output of the model of the
* {@link ANeuralNetworksExecution}. Evaluation of the execution must not have
* been scheduled. Once evaluation of the execution has been scheduled, the
* application must not change the content of the region until the execution has
* completed.
*
* <p>The provided memory must outlive the execution.</p>
*
* If the output is optional, you can indicate that it is omitted by
* using {@link ANeuralNetworksExecution_setOutput} instead, passing nullptr for
* buffer and 0 for length.
*
* If the memory is an AHardwareBuffer of a format other than AHARDWAREBUFFER_FORMAT_BLOB created
* from {@link ANeuralNetworksMemory_createFromAHardwareBuffer}, or an opaque memory object created
* from {@link ANeuralNetworksMemory_createFromDesc}, both offset and length must be 0, indicating
* the whole memory is used.
*
* Otherwise, if the user has not set the execution to accept padded output memory objects by
* calling {@link ANeuralNetworksExecution_enableInputAndOutputPadding}, then the length argument
* must be equal to the raw size of the output (i.e. the size of an element multiplied by the
* number of elements). Passing a length argument with value not equal to the raw size of the output
* will result in ANEURALNETWORKS_BAD_DATA.
*
* Otherwise, if the user has set the execution to accept padded output memory objects by calling
* {@link ANeuralNetworksExecution_enableInputAndOutputPadding}, the length argument may be greater
* than the raw size of the output, and the extra bytes at the end of the memory region may be used
* by the driver to access data in chunks, for efficiency. Passing a length argument with value
* less than the raw size of the output will result in ANEURALNETWORKS_BAD_DATA.
*
* This function may only be invoked when the execution is in the preparation state.
*
* See {@link ANeuralNetworksExecution} for information on execution states and multithreaded usage.
* See {@link ANeuralNetworksMemory_createFromAHardwareBuffer} for information on
* AHardwareBuffer usage.
* See {@link ANeuralNetworksMemory_createFromDesc} for information on usage of memory objects
* created from memory descriptors.
* See {@link ANeuralNetworksCompilation_getPreferredMemoryAlignmentForOutput} and
* {@link ANeuralNetworksCompilation_getPreferredMemoryPaddingForOutput} for information on getting
* preferred memory alignment and padding, to improve performance.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param execution The execution to be modified.
* @param index The index of the output argument we are setting. It is
* an index into the lists passed to
* {@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not
* the index associated with {@link ANeuralNetworksModel_addOperand}.
* @param type The {@link ANeuralNetworksOperandType} of the operand. This should be
* used to specify the dimensions that were left
* unspecified when the operand was added to the
* model. All other properties of the type must be the
* same as specified in the model. If the type is the same
* as specified when the model was built, NULL can be
* passed. Neither the {@link ANeuralNetworksOperandType}
* nor the dimensions it points to need to outlive the call
* to {@link ANeuralNetworksExecution_setOutputFromMemory}.
* Since NNAPI feature level 3, the output operand can have unspecified
* dimensions or rank to be deduced dynamically during the execution.
* However, the user must provide a large enough memory. The user
* can retrieve the output dimensional information after the execution
* by {@link ANeuralNetworksExecution_getOutputOperandRank} and
* {@link ANeuralNetworksExecution_getOutputOperandDimensions}.
* @param memory The memory where the data is to be stored.
* @param offset This specifies the location of the data within the memory.
* The offset is in bytes from the start of memory.
* @param length The size of the data value in bytes plus any end padding.
*
* @return ANEURALNETWORKS_NO_ERROR if successful, ANEURALNETWORKS_BAD_DATA if the
* name is not recognized or the buffer is too small for the output.
ANeuralNetworksExecution_setReusable ¶
ANeuralNetworksExecution_setReusable :: proc "c" (execution: ^ANeuralNetworksExecution, reusable: bool) -> NNResultCode ---
*
* Specifies whether the {@link ANeuralNetworksExecution} can be reused for multiple computations.
*
* By default, the {@link ANeuralNetworksExecution} is not reusable.
*
* Setting the execution to be reusable enables multiple computations to be scheduled and evaluated
* on the same execution sequentially, either by means of
* {@link ANeuralNetworksExecution_burstCompute}, {@link ANeuralNetworksExecution_compute},
* {@link ANeuralNetworksExecution_startCompute} or
* {@link ANeuralNetworksExecution_startComputeWithDependencies}: The application may schedule and
* evaluate a computation again from the completed state of a reusable execution.
*
* This function may only be invoked when the execution is in the preparation state.
*
* See {@link ANeuralNetworksExecution} for information on execution states and multithreaded usage.
*
* @param execution The execution to be modified.
* @param reusable 'true' if the execution is to be reusable, 'false' if not.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
* ANEURALNETWORKS_UNEXPECTED_NULL if execution is NULL.
* ANEURALNETWORKS_BAD_STATE if the execution is not in the preparation state.
*
* Available since NNAPI feature level 5.
* Available since API level 31.
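Example: a sketch, assuming ANeuralNetworksExecution_compute is bound as referenced above, of marking an execution reusable during preparation and then evaluating it twice.

package nn_sketch

import android "core:sys/android"

run_twice :: proc(execution: ^android.ANeuralNetworksExecution) -> bool {
	if android.ANeuralNetworksExecution_setReusable(execution, true) != .NO_ERROR {
		return false
	}
	for _ in 0 ..< 2 {
		// A reusable execution may be computed again from the completed state.
		if android.ANeuralNetworksExecution_compute(execution) != .NO_ERROR {
			return false
		}
	}
	return true
}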
ANeuralNetworksExecution_setTimeout ¶
ANeuralNetworksExecution_setTimeout :: proc "c" (execution: ^ANeuralNetworksExecution, duration: u64) -> NNResultCode ---
*
* Set the maximum expected duration of the specified execution.
*
* If the device is not able to complete the execution within the specified
* duration, the execution may be aborted. The timeout duration begins at a
* call to one of:
* - {@link ANeuralNetworksExecution_burstCompute}
* - {@link ANeuralNetworksExecution_compute}
* - {@link ANeuralNetworksExecution_startCompute}
* - {@link ANeuralNetworksExecution_startComputeWithDependencies}
*
* This timeout duration acts as a hint to drivers, and can be used to both free
* up compute resources within the driver and return control back to the
* application quicker than is possible without the hint. It enables drivers
* that are able to estimate how long an execution will take to abort the
* execution before it has even started if the driver believes the execution
* cannot be completed within the timeout duration. Similarly, it enables
* drivers to abort an ongoing execution if it is taking too long. However, this
* call does not guarantee that the execution will complete or abort within the
* timeout duration.
*
* By default (i.e., unless ANeuralNetworksExecution_setTimeout is called),
* the timeout duration for execution is considered infinite.
*
* The {@link ANeuralNetworksExecution} must have been created from an
* {@link ANeuralNetworksCompilation} which in turn was created from
* {@link ANeuralNetworksCompilation_createForDevices} with numDevices = 1,
* otherwise this function will fail with ANEURALNETWORKS_BAD_DATA. If the
* device has a feature level reported by
* {@link ANeuralNetworksDevice_getFeatureLevel} that is lower than
* {@link ANEURALNETWORKS_FEATURE_LEVEL_4}, then the timeout duration hint will
* be ignored.
*
* This function may only be invoked when the execution is in the preparation state.
*
* See {@link ANeuralNetworksExecution} for information on execution states and multithreaded usage.
*
* @param execution The execution to be modified.
* @param duration The maximum amount of time in nanoseconds that is expected to
* be spent executing a model. If this duration is exceeded, the execution
* may be aborted. If set to 0, the timeout duration is considered infinite.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
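Example: a sketch, assuming ANeuralNetworks_getDefaultLoopTimeout is bound as referenced above, of setting both the overall deadline hint and the WHILE-loop timeout while the execution is in the preparation state.

package nn_sketch

import android "core:sys/android"

set_deadlines :: proc(execution: ^android.ANeuralNetworksExecution) -> bool {
	one_second: u64 = 1_000_000_000 // nanoseconds
	if android.ANeuralNetworksExecution_setTimeout(execution, one_second) != .NO_ERROR {
		return false
	}
	return android.ANeuralNetworksExecution_setLoopTimeout(execution, android.ANeuralNetworks_getDefaultLoopTimeout()) == .NO_ERROR
}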
ANeuralNetworksExecution_startCompute ¶
ANeuralNetworksExecution_startCompute :: proc "c" (execution: ^ANeuralNetworksExecution, event: ^^ANeuralNetworksEvent) -> NNResultCode ---
*
* Schedule asynchronous evaluation of the execution.
*
* <p>Schedules asynchronous evaluation of the execution. Once the execution
* has completed and the outputs are ready to be consumed, the returned event
* will be signaled. Use {@link ANeuralNetworksEvent_wait} to wait for that
* event.
* </p>
*
* ANeuralNetworksEvent_wait must be called to recuperate the resources used
* by the execution.
*
* If {@link ANeuralNetworksExecution_setTimeout} was called on this execution,
* and the execution is not able to complete before the timeout duration is
* exceeded, then execution may be aborted, in which case
* ANEURALNETWORKS_MISSED_DEADLINE_* {@link ResultCode} will be returned through
* {@link ANeuralNetworksExecution_startCompute} or
* {@link ANeuralNetworksEvent_wait} on the event object. If the device has a
* feature level reported by {@link ANeuralNetworksDevice_getFeatureLevel} that
* is lower than {@link ANEURALNETWORKS_FEATURE_LEVEL_4}, then the timeout
* duration hint will be ignored.
*
* If this execution contains a {@link ANEURALNETWORKS_WHILE} operation, and
* the condition model does not output false within the loop timeout duration,
* then execution will be aborted and ANEURALNETWORKS_MISSED_DEADLINE_* {@link ResultCode}
* will be returned through {@link ANeuralNetworksEvent_wait} on the event
* object.
*
* If the device can detect before the execution has started that the execution
* will not complete within the timeout duration, the device may choose to skip
* the execution and instead return ANEURALNETWORKS_MISSED_DEADLINE_* {@link ResultCode}.
*
* Before NNAPI feature level 5, this function may only be invoked when the execution is in the
* preparation state. Starting at NNAPI feature level 5, if the user sets the execution to be
* reusable by {@link ANeuralNetworksExecution_setReusable}, this function may also be invoked when
* the execution is in the completed state.
*
* See {@link ANeuralNetworksExecution} for information on execution states and multithreaded usage.
*
* See {@link ANeuralNetworksExecution_compute} for synchronous execution.
* See {@link ANeuralNetworksExecution_burstCompute} for burst synchronous execution.
* See {@link ANeuralNetworksExecution_startComputeWithDependencies} for
* asynchronous execution with dependencies.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param execution The execution to be scheduled and executed.
* @param event The event that will be signaled on completion. event is set to
* NULL if there's an error.
*
* @return ANEURALNETWORKS_NO_ERROR if the evaluation is successfully scheduled.
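Example: a sketch, assuming ANeuralNetworksEvent_wait and ANeuralNetworksEvent_free are bound as referenced above, of scheduling asynchronously and then blocking on the returned event to reclaim its resources.

package nn_sketch

import android "core:sys/android"

run_async :: proc(execution: ^android.ANeuralNetworksExecution) -> bool {
	event: ^android.ANeuralNetworksEvent
	if android.ANeuralNetworksExecution_startCompute(execution, &event) != .NO_ERROR {
		return false
	}
	ok := android.ANeuralNetworksEvent_wait(event) == .NO_ERROR
	android.ANeuralNetworksEvent_free(event)
	return ok
}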
ANeuralNetworksExecution_startComputeWithDependencies ¶
ANeuralNetworksExecution_startComputeWithDependencies :: proc "c" (execution: ^ANeuralNetworksExecution, dependencies: ^^ANeuralNetworksEvent, num_dependencies: u32, duration: u64, event: ^^ANeuralNetworksEvent) -> NNResultCode ---
*
* Schedule asynchronous evaluation of the execution with dependencies.
*
* The execution will wait for all the depending events to be signaled before
* starting the evaluation. Once the execution has completed and the outputs
* are ready to be consumed, the returned event will be signaled. Depending on which
* devices are handling the execution, the event could be backed by a sync fence.
* Use {@link ANeuralNetworksEvent_wait} to wait for that event.
*
* ANeuralNetworksEvent_wait must be called to recuperate the resources used
* by the execution.
*
* If parts of the execution are scheduled on devices that do not support fenced execution,
* the function call may wait for such parts to finish before returning.
*
* The function will return an error if any of the events in dependencies is already in a bad
* state. After the execution is scheduled, if any of the events in dependencies does not complete
* normally, the execution will fail, and {@link ANeuralNetworksEvent_wait} on the returned
* event will return an error.
*
* The function will return an error if any of the execution outputs has a tensor operand type
* that is not fully specified.
*
* The function can be passed a timeout duration in nanoseconds. This timeout
* duration acts as a hint to drivers in the same way that the timeout durations
* in {@link ANeuralNetworksCompilation_setTimeout} and {@link
* ANeuralNetworksExecution_setTimeout} act as hints to drivers. The duration
* begins when all waitFor sync fences have been signaled, and can be used
* together with {@link ANeuralNetworksExecution_setTimeout} which specifies the
* maximum timeout duration beginning at the call to
* {@link ANeuralNetworksExecution_startComputeWithDependencies}.
* If the duration is non-zero, the {@link ANeuralNetworksExecution} must have been created
* from an {@link ANeuralNetworksCompilation} which in turn was created from
* {@link ANeuralNetworksCompilation_createForDevices} with numDevices = 1,
* otherwise this function will fail with ANEURALNETWORKS_BAD_DATA. If either
* the timeout duration from {@link ANeuralNetworksExecution_setTimeout} or the
* timeout duration passed to this call is exceeded, the execution may be
* aborted, in which case ANEURALNETWORKS_MISSED_DEADLINE_* {@link ResultCode} will be
* returned through {@link ANeuralNetworksExecution_startComputeWithDependencies}
* or {@link ANeuralNetworksEvent_wait} on the event object. If the device has a
* feature level reported by {@link ANeuralNetworksDevice_getFeatureLevel} that
* is lower than {@link ANEURALNETWORKS_FEATURE_LEVEL_4}, then the timeout duration
* hints will be ignored.
*
* If this execution contains a {@link ANEURALNETWORKS_WHILE} operation, and
* the condition model does not output false within the loop timeout duration,
* then execution will be aborted and ANEURALNETWORKS_MISSED_DEADLINE_* {@link ResultCode}
* will be returned through {@link ANeuralNetworksEvent_wait} on the event
* object.
*
* Before NNAPI feature level 5, this function may only be invoked when the execution is in the
* preparation state. Starting at NNAPI feature level 5, if the user sets the execution to be
* reusable by {@link ANeuralNetworksExecution_setReusable}, this function may also be invoked when
* the execution is in the completed state.
*
* See {@link ANeuralNetworksExecution} for information on execution states and multithreaded usage.
*
* See {@link ANeuralNetworksExecution_compute} for synchronous execution.
* See {@link ANeuralNetworksExecution_burstCompute} for burst synchronous execution.
* See {@link ANeuralNetworksExecution_startCompute} for regular asynchronous execution.
*
* @param execution The execution to be scheduled and executed.
* @param dependencies A set of depending events. The actual evaluation will not start
* until all the events are signaled.
* @param num_dependencies The number of events in the dependencies set.
* @param duration The maximum amount of time in nanoseconds that is expected to
* be spent executing the model after all dependencies are
* signaled. If set to 0, the timeout duration is considered
* infinite.
* @param event The event that will be signaled on completion. event is set to
* NULL if there's an error.
*
* @return ANEURALNETWORKS_NO_ERROR if the evaluation is successfully scheduled.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
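Example: a sketch, under the same event-binding assumptions as the startCompute example above, of starting an evaluation only after two prior events have signaled; a duration of 0 means no per-call timeout hint.

package nn_sketch

import android "core:sys/android"

run_after :: proc(execution: ^android.ANeuralNetworksExecution, dep_a, dep_b: ^android.ANeuralNetworksEvent) -> bool {
	deps := [2]^android.ANeuralNetworksEvent{dep_a, dep_b}
	event: ^android.ANeuralNetworksEvent
	if android.ANeuralNetworksExecution_startComputeWithDependencies(execution, &deps[0], 2, 0, &event) != .NO_ERROR {
		return false
	}
	ok := android.ANeuralNetworksEvent_wait(event) == .NO_ERROR
	android.ANeuralNetworksEvent_free(event)
	return ok
}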
ANeuralNetworksMemoryDesc_addInputRole ¶
ANeuralNetworksMemoryDesc_addInputRole :: proc "c" (desc: ^ANeuralNetworksMemoryDesc, compilation: ^ANeuralNetworksCompilation, index: u32, frequency: f32) -> NNResultCode ---
*
* Specify that a memory object will be playing the role of an input to an execution created from a
* particular compilation.
*
* The compilation and the input index fully specify an input operand. This function
* may be invoked multiple times on the same memory descriptor with different input operands,
* and the same input operand may be specified on multiple memory descriptors. However,
* specifying the same input operand on the same memory descriptor more than once will
* return an error.
*
* The dimensions of the corresponding model operands of all the roles specified by
* {@link ANeuralNetworksMemoryDesc_addInputRole} and
* {@link ANeuralNetworksMemoryDesc_addOutputRole} must be compatible with each other. Two
* dimensions are incompatible if both ranks are fully specified but have different values, or if
* there is at least one axis that is fully specified in both but has different values.
*
* At least one of {@link ANeuralNetworksMemoryDesc_addInputRole} and
* {@link ANeuralNetworksMemoryDesc_addOutputRole} must be called on a memory descriptor
* before invoking {@link ANeuralNetworksMemoryDesc_finish}.
*
* Attempting to modify a memory descriptor once {@link ANeuralNetworksMemoryDesc_finish} has been
* called will return an error.
*
* See {@link ANeuralNetworksMemoryDesc} for information on multithreaded usage.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
*
* @param desc The memory descriptor to be modified.
* @param compilation The compilation object. It must already have been finished by calling
* {@link ANeuralNetworksCompilation_finish}, and must outlive the memory
* descriptor.
* @param index The index of the input argument we are referencing from the compilation. It is
* an index into the inputs list passed to
* {@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not
* the index associated with {@link ANeuralNetworksModel_addOperand}.
* @param frequency A floating-point value within the range (0.0, 1.0]. Describes how likely the
* memory is to be used in the specified role. This is provided as a hint to
* optimize the case when different roles prefer different memory locations or data
* layouts.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
ANeuralNetworksMemoryDesc_addOutputRole ¶
ANeuralNetworksMemoryDesc_addOutputRole :: proc "c" (desc: ^ANeuralNetworksMemoryDesc, compilation: ^ANeuralNetworksCompilation, index: u32, frequency: f32) -> NNResultCode ---
*
* Specify that a memory object will be playing the role of an output to an execution created from a
* particular compilation.
*
* The compilation and the output index fully specify an output operand. This function
* may be invoked multiple times on the same memory descriptor with different output operands,
* and the same output operand may be specified on multiple memory descriptors. However,
* specifying the same output operand on the same memory descriptor object more than once will
* return an error.
*
* The dimensions of the corresponding model operands of all the roles specified by
* {@link ANeuralNetworksMemoryDesc_addInputRole} and
* {@link ANeuralNetworksMemoryDesc_addOutputRole} must be compatible with each other. Two
* dimensions are incompatible if both ranks are fully specified but have different values, or if
* there is at least one axis that is fully specified in both but has different values.
*
* At least one of {@link ANeuralNetworksMemoryDesc_addInputRole} and
* {@link ANeuralNetworksMemoryDesc_addOutputRole} must be called on the memory descriptor
* before invoking {@link ANeuralNetworksMemoryDesc_finish}.
*
* Attempting to modify a memory descriptor once {@link ANeuralNetworksMemoryDesc_finish} has been
* called will return an error.
*
* See {@link ANeuralNetworksMemoryDesc} for information on multithreaded usage.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
*
* @param desc The memory descriptor to be modified.
* @param compilation The compilation object. It must already have been finished by calling
* {@link ANeuralNetworksCompilation_finish}, and must outlive the memory
* descriptor.
* @param index The index of the output argument we are referencing from the compilation. It is
* an index into the outputs list passed to
* {@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not
* the index associated with {@link ANeuralNetworksModel_addOperand}.
* @param frequency A floating-point value within the range (0.0, 1.0]. Describes how likely the
* memory is to be used in the specified role. This is provided as a hint to
* optimize the case when multiple roles prefer different memory locations or data
* layouts.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
ANeuralNetworksMemoryDesc_create ¶
ANeuralNetworksMemoryDesc_create :: proc "c" (desc: ^^ANeuralNetworksMemoryDesc) -> NNResultCode ---
*
* Create a {@link ANeuralNetworksMemoryDesc} with no properties.
*
* This only creates the memory descriptor. Its properties should be set with calls to
* {@link ANeuralNetworksMemoryDesc_addInputRole},
* {@link ANeuralNetworksMemoryDesc_addOutputRole}, and
* {@link ANeuralNetworksMemoryDesc_setDimensions}.
*
* {@link ANeuralNetworksMemoryDesc_finish} must be called once all properties have been set.
*
* {@link ANeuralNetworksMemoryDesc_free} must be called once the memory descriptor
* is no longer needed.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
*
* @param desc The {@link ANeuralNetworksMemoryDesc} to be created.
* Set to NULL if unsuccessful.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
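Example: a sketch of the lifecycle described above, under the NO_ERROR assumption: create the descriptor, add a role and dimensions, finish it, create the memory, and free the descriptor, which need not outlive the memory.

package nn_sketch

import android "core:sys/android"

// `compilation` is assumed to be a finished ANeuralNetworksCompilation.
make_device_memory :: proc(compilation: ^android.ANeuralNetworksCompilation) -> ^android.ANeuralNetworksMemory {
	desc: ^android.ANeuralNetworksMemoryDesc
	if android.ANeuralNetworksMemoryDesc_create(&desc) != .NO_ERROR {
		return nil
	}
	defer android.ANeuralNetworksMemoryDesc_free(desc)

	dims := [2]u32{2, 3}
	memory: ^android.ANeuralNetworksMemory
	if android.ANeuralNetworksMemoryDesc_addInputRole(desc, compilation, 0, 1.0) == .NO_ERROR &&
	   android.ANeuralNetworksMemoryDesc_setDimensions(desc, 2, raw_data(dims[:])) == .NO_ERROR &&
	   android.ANeuralNetworksMemoryDesc_finish(desc) == .NO_ERROR &&
	   android.ANeuralNetworksMemory_createFromDesc(desc, &memory) == .NO_ERROR {
		return memory
	}
	return nil
}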
ANeuralNetworksMemoryDesc_finish ¶
ANeuralNetworksMemoryDesc_finish :: proc "c" (desc: ^ANeuralNetworksMemoryDesc) -> NNResultCode ---
*
* Indicate that we have finished modifying a memory descriptor. Required before calling
* {@link ANeuralNetworksMemory_createFromDesc}.
*
* This function must only be called once for a given memory descriptor.
*
* See {@link ANeuralNetworksMemoryDesc} for information on multithreaded usage.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
*
* @param desc The memory descriptor to be finished.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
ANeuralNetworksMemoryDesc_free ¶
ANeuralNetworksMemoryDesc_free :: proc "c" (desc: ^ANeuralNetworksMemoryDesc) ---
*
* Destroy a memory descriptor.
*
* The memory descriptor need not have been finished by a call to
* {@link ANeuralNetworksMemoryDesc_finish}.
*
* See {@link ANeuralNetworksMemoryDesc} for information on multithreaded usage.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
*
* @param desc The memory descriptor to be destroyed. Passing NULL is acceptable and
* results in no operation.
ANeuralNetworksMemoryDesc_setDimensions ¶
ANeuralNetworksMemoryDesc_setDimensions :: proc "c" (desc: ^ANeuralNetworksMemoryDesc, rank: u32, dimensions: [^]u32) -> NNResultCode ---
*
* Set the dimensional information of the memory descriptor.
*
* The specified dimensions must be compatible with the dimensions of the corresponding model
* operands of all the roles specified by {@link ANeuralNetworksMemoryDesc_addInputRole} and
* {@link ANeuralNetworksMemoryDesc_addOutputRole}. Two dimensions are incompatible if both ranks
* are fully specified but have different values, or if there is at least one axis that is fully
* specified in both but has different values.
*
* Attempting to modify a memory descriptor once {@link ANeuralNetworksMemoryDesc_finish} has been
* called will return an error.
*
* See {@link ANeuralNetworksMemoryDesc} for information on multithreaded usage.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
*
* @param desc The memory descriptor to be modified.
* @param rank The number of dimensions. Must be 0 for scalars.
* @param dimensions An array of dimensions. An entry with the value 0 indicates that the
* corresponding axis has an unknown size.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
ANeuralNetworksMemory_copy ¶
ANeuralNetworksMemory_copy :: proc "c" (src: ^ANeuralNetworksMemory, dst: ^ANeuralNetworksMemory) -> NNResultCode ---
*
* Copies data from one memory object to another.
*
* If at most one of the src and dst is created from {@link ANeuralNetworksMemory_createFromDesc},
* the src and dst must have the same logical size:
* - If the memory is created from {@link ANeuralNetworksMemory_createFromFd}, or if it is created
* from {@link ANeuralNetworksMemory_createFromAHardwareBuffer} with format of
* AHARDWAREBUFFER_FORMAT_BLOB, the logical size equals the size of the memory.
* - If the memory is created from {@link ANeuralNetworksMemory_createFromAHardwareBuffer} with a
* format other than AHARDWAREBUFFER_FORMAT_BLOB, the logical size equals the size when there is
* no padding and the data is tightly packed. This function may fail if the AHardwareBuffer
* cannot be accessed.
* - If the memory is created from {@link ANeuralNetworksMemory_createFromDesc}, the logical size
* equals the size indicated by the {@link OperandCode} multiplied by the number of elements. This
* function will fail if the number of elements is unknown.
*
* If both src and dst are created from {@link ANeuralNetworksMemory_createFromDesc}, they must have
* compatible dimensions. Two dimensions are incompatible if both ranks are fully specified but
* have different values, or if there is at least one axis that is fully specified in both but has
* different values. The dst may have unspecified dimensions or rank. In such a case, the dimensions
* of dst will get updated according to the dimensions of the src.
*
* In both cases, if the src is created from {@link ANeuralNetworksMemory_createFromDesc}, it must
* have been used as an output in a successful execution, or used as the destination memory in a
* successful {@link ANeuralNetworksMemory_copy}.
*
* The src and dst may have different data layout, in which case the data copying is performed
* logically with data layout transformation.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
*
* @param src The source memory object.
* @param dst The destination memory object.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
ANeuralNetworksMemory_createFromAHardwareBuffer ¶
ANeuralNetworksMemory_createFromAHardwareBuffer :: proc "c" (ahwb: ^AHardwareBuffer, memory: ^^ANeuralNetworksMemory) -> NNResultCode ---
*
* Creates a shared memory object from an AHardwareBuffer handle.
*
* If the shared memory is backed by an AHardwareBuffer of AHARDWAREBUFFER_FORMAT_BLOB
* format, it can be used the same way as shared memory created from a file handle. See
* {@link ANeuralNetworksMemory} for a description on how to use this shared memory.
*
* If the shared memory is backed by an AHardwareBuffer of a format other than
* AHARDWAREBUFFER_FORMAT_BLOB, it can only be used for model inputs and outputs.
* When calling {@link ANeuralNetworksExecution_setInputFromMemory} or
* {@link ANeuralNetworksExecution_setOutputFromMemory} with the shared memory, both
* offset and length must be set to zero and the entire memory region will be
* associated with the specified input or output operand. There is no guarantee
* that an arbitrary AHardwareBuffer_Format and AHardwareBuffer_UsageFlags combination
* can be used by arbitrary devices. The execution will fail if the selected set of
* devices cannot consume the buffer.
*
* Calling {@link ANeuralNetworksModel_setOperandValueFromMemory} with shared memory
* backed by an AHardwareBuffer of a format other than AHARDWAREBUFFER_FORMAT_BLOB is
* disallowed.
*
* The provided AHardwareBuffer must outlive the ANeuralNetworksMemory object.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
*
* @param ahwb The AHardwareBuffer handle.
* @param memory The memory object to be created.
* Set to NULL if unsuccessful.
*
* @return ANEURALNETWORKS_NO_ERROR if the request completed normally.
*
* @see AHardwareBuffer
ANeuralNetworksMemory_createFromDesc ¶
ANeuralNetworksMemory_createFromDesc :: proc "c" (desc: ^ANeuralNetworksMemoryDesc, memory: ^^ANeuralNetworksMemory) -> NNResultCode ---
*
* Creates a memory object from a memory descriptor.
*
* The memory object is created with an uninitialized buffer. A memory object with an uninitialized
* buffer may only be used according to the roles specified by {@link
* ANeuralNetworksMemoryDesc_addOutputRole}, or as the destination memory in {@link
* ANeuralNetworksMemory_copy}. The buffer of a memory object
* is initialized after the memory object
* is used as an output in a successful execution, or used as the destination memory in a successful
* {@link ANeuralNetworksMemory_copy}. A memory object with an initialized buffer may be used
* according to all roles specified in {@link ANeuralNetworksMemoryDesc}, or as the source or
* destination memory in {@link ANeuralNetworksMemory_copy}. The buffer of a memory object will
* return to the uninitialized state if the memory object is used as an output in a failed
* execution, or used as the destination memory in a failed {@link ANeuralNetworksMemory_copy}.
*
* The dimensions of the memory descriptor are deduced from the dimensions of the corresponding
* model operands of all the roles specified by {@link ANeuralNetworksMemoryDesc_addInputRole} and
* {@link ANeuralNetworksMemoryDesc_addOutputRole}, as well as the dimensions set by the call to
* {@link ANeuralNetworksMemoryDesc_setDimensions}, if any. The memory descriptor may have
* unspecified dimensions or rank. In such a case, the same memory object may be used with different
* shapes of outputs in different executions. When the memory is used as an input, the input shape
* must be the same as the output shape from the last execution using this memory object as an
* output, or the last {@link ANeuralNetworksMemory_copy} using this memory object as the
* destination memory. Creating a memory object with unspecified dimensions or rank may fail for
* certain sets of roles.
*
* Using the memory in roles or shapes that are not compatible with the rules specified above will
* return an error.
*
* When calling {@link ANeuralNetworksExecution_setInputFromMemory} or
* {@link ANeuralNetworksExecution_setOutputFromMemory} with the memory object,
* both offset and length must be set to zero and the entire memory region will be
* associated with the specified input or output operand.
*
* Calling {@link ANeuralNetworksModel_setOperandValueFromMemory} with the memory created from this
* function will return an error.
*
* {@link ANeuralNetworksMemory_free} must be called once the memory is no longer needed.
*
* Attempting to create memory from an unfinished memory descriptor will return an error.
*
* The provided {@link ANeuralNetworksMemoryDesc} need not outlive the {@link ANeuralNetworksMemory}
* object.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
*
* @param desc The memory descriptor.
* @param memory The memory object to be created.
* Set to NULL if unsuccessful.
*
* @return ANEURALNETWORKS_NO_ERROR if successful; ANEURALNETWORKS_OP_FAILED if the memory is
* created with unspecified dimensions or rank and it is not supported for this set of
* roles.
ANeuralNetworksMemory_createFromFd ¶
ANeuralNetworksMemory_createFromFd :: proc "c" (size: uint, protect: i32, fd: i32, offset: uint, memory: ^^ANeuralNetworksMemory) -> NNResultCode ---
*
* Creates a shared memory object from a file descriptor.
*
* The shared memory is backed by a file descriptor via mmap.
* See {@link ANeuralNetworksMemory} for a description on how to use
* this shared memory.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param size The requested size in bytes.
* Must not be larger than the file size.
* @param protect The desired memory protection for the mapping.
* It is either PROT_NONE or the bitwise OR of one or
* more of the following flags: PROT_READ, PROT_WRITE.
* @param fd The requested file descriptor.
* The file descriptor has to be mmap-able. The file
* descriptor will be duplicated.
* @param offset The offset to the beginning of the file of the area to map.
* The offset has to be aligned to a page size.
* @param memory The memory object to be created.
* Set to NULL if unsuccessful.
*
* @return ANEURALNETWORKS_NO_ERROR if the request completed normally.
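Example: a sketch of wrapping an existing mmap-able file descriptor as NNAPI shared memory; PROT_READ is the standard Linux/Android mmap protection bit, defined locally since the constant is not part of this binding.

package nn_sketch

import android "core:sys/android"

PROT_READ :: i32(0x1) // standard mmap protection bit on Linux/Android

// `fd` is assumed to come from elsewhere (e.g. ASharedMemory); it is
// duplicated internally, so the caller keeps ownership of the original.
wrap_fd :: proc(fd: i32, size: uint) -> ^android.ANeuralNetworksMemory {
	memory: ^android.ANeuralNetworksMemory
	if android.ANeuralNetworksMemory_createFromFd(size, PROT_READ, fd, 0, &memory) != .NO_ERROR {
		return nil
	}
	return memory
}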
ANeuralNetworksMemory_free ¶
ANeuralNetworksMemory_free :: proc "c" (memory: ^ANeuralNetworksMemory) ---
*
* Delete a memory object.
*
* Destroys the object used by the run time to keep track of the memory.
* This will free the underlying actual memory if no other code has open
* handles to this memory.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param memory The memory object to be freed. Passing NULL is acceptable and
*               results in no operation.
ANeuralNetworksModel_addOperand ¶
ANeuralNetworksModel_addOperand :: proc "c" (model: ^ANeuralNetworksModel, type: ^ANeuralNetworksOperandType) -> NNResultCode ---
*
* Add an operand to a model.
*
* The order in which the operands are added is important. The first one added
* to a model will have the index value 0, the second 1, etc. These indexes are
* used as operand identifiers in
* {@link ANeuralNetworksModel_addOperation},
* {@link ANeuralNetworksModel_identifyInputsAndOutputs},
* {@link ANeuralNetworksModel_setOperandValue},
* {@link ANeuralNetworksModel_setOperandValueFromMemory},
* {@link ANeuralNetworksExecution_setInput},
* {@link ANeuralNetworksExecution_setInputFromMemory},
* {@link ANeuralNetworksExecution_setOutput}, and
* {@link ANeuralNetworksExecution_setOutputFromMemory}.
*
* <p>Every operand must be referenced in exactly one of the following
* ways:<ul>
* <li>It is identified as a model input with
* {@link ANeuralNetworksModel_identifyInputsAndOutputs}.</li>
* <li>It is identified as a constant with
* {@link ANeuralNetworksModel_setOperandValue} or
* {@link ANeuralNetworksModel_setOperandValueFromMemory}.</li>
* <li>It is identified as an output of exactly one operation with
* {@link ANeuralNetworksModel_addOperation}.</li>
* </ul></p>
* <p>An operand that is identified as a model input or as a constant
* must not also be identified as a model output with
* {@link ANeuralNetworksModel_identifyInputsAndOutputs}.</p>
*
* To build a model that can accommodate inputs of various sizes, as
* you may want to do for a CNN, leave unspecified the dimensions that
* will vary at run time. If you do so, fully specify dimensions
* when calling {@link ANeuralNetworksExecution_setInput} or
* {@link ANeuralNetworksExecution_setInputFromMemory}.
*
* Attempting to modify a model once {@link ANeuralNetworksModel_finish} has been
* called will return an error.
*
* See {@link ANeuralNetworksModel} for information on multithreaded usage.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param model The model to be modified.
* @param type The {@link ANeuralNetworksOperandType} that describes the shape
* of the operand. Neither the {@link ANeuralNetworksOperandType}
* nor the dimensions it points to need to outlive the call to
* {@link ANeuralNetworksModel_addOperand}.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
ANeuralNetworksModel_addOperation ¶
ANeuralNetworksModel_addOperation :: proc "c" (model: ^ANeuralNetworksModel, type: OperationCode, inputCount: u32, inputs: [^]u32, outputCount: u32, outputs: [^]u32) -> NNResultCode ---
*
* Add an operation to a model.
*
* @param model The model to be modified.
* @param type The {@link ANeuralNetworksOperationType} of the operation.
* @param inputCount The number of entries in the inputs array.
* @param inputs An array of indexes identifying each operand.
* @param outputCount The number of entries in the outputs array.
* @param outputs An array of indexes identifying each operand.
*
* The operands specified by inputs and outputs must have been
* previously added by calls to {@link ANeuralNetworksModel_addOperand}.
*
* Attempting to modify a model once {@link ANeuralNetworksModel_finish} has been
* called will return an error.
*
* See {@link ANeuralNetworksModel} for information on multithreaded usage.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
ANeuralNetworksModel_create ¶
ANeuralNetworksModel_create :: proc "c" (model: ^^ANeuralNetworksModel) -> NNResultCode ---
*
* Create an empty {@link ANeuralNetworksModel}.
*
* <p>This only creates the object. Computation is performed once
* {@link ANeuralNetworksExecution_burstCompute},
* {@link ANeuralNetworksExecution_compute},
* {@link ANeuralNetworksExecution_startCompute} or
* {@link ANeuralNetworksExecution_startComputeWithDependencies} is invoked.</p>
*
* The model should be constructed with calls to
* {@link ANeuralNetworksModel_addOperation} and
* {@link ANeuralNetworksModel_addOperand}.
*
* <p>{@link ANeuralNetworksModel_finish} should be called once the model
* has been fully constructed.</p>
*
* <p>{@link ANeuralNetworksModel_free} should be called once the model
* is no longer needed.</p>
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param model The {@link ANeuralNetworksModel} to be created.
* Set to NULL if unsuccessful.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
ANeuralNetworksModel_finish ¶
ANeuralNetworksModel_finish :: proc "c" (model: ^ANeuralNetworksModel) -> NNResultCode ---
*
* Indicate that we have finished modifying a model. Required before
* calling {@link ANeuralNetworksCompilation_create} and
* {@link ANeuralNetworksCompilation_createForDevices}.
*
* An application must ensure that no other thread uses the model at the same
* time.
*
* This function must only be called once for a given model.
*
* See {@link ANeuralNetworksModel} for information on multithreaded usage.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param model The model to be finished.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
ANeuralNetworksModel_free ¶
ANeuralNetworksModel_free :: proc "c" (model: ^ANeuralNetworksModel) ---
*
* Destroy a model.
*
* The model need not have been finished by a call to
* {@link ANeuralNetworksModel_finish}.
*
* See {@link ANeuralNetworksModel} for information on multithreaded usage.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param model The model to be destroyed. Passing NULL is acceptable and
* results in no operation.
ANeuralNetworksModel_getSupportedOperationsForDevices ¶
ANeuralNetworksModel_getSupportedOperationsForDevices :: proc "c" (model: ^ANeuralNetworksModel, devices: [^]^ANeuralNetworksDevice, numDevices: u32, supportedOps: [^]bool) -> NNResultCode ---
*
* Get the supported operations for a specified set of devices. If multiple devices
* are selected, the supported operation list is a union of supported operations of all
* selected devices.
*
* @param model The model to be queried.
* @param devices The set of devices. Must not contain duplicates.
* @param numDevices The number of devices in the set.
* @param supportedOps The boolean array to be filled. True means supported. The size of the
*                     boolean array must be at least as large as the number of operations
*                     in the model. The order of elements in the supportedOps array matches
*                     the order in which the corresponding operations were added to the model.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
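Example: a sketch, under the NO_ERROR assumption, of probing which of a model's operations a single device can run.

package nn_sketch

import android "core:sys/android"

// `op_count` must be at least the number of operations in the model; the
// result is ordered by the order in which operations were added.
supported_ops :: proc(model: ^android.ANeuralNetworksModel, device: ^android.ANeuralNetworksDevice, op_count: int) -> []bool {
	supported := make([]bool, op_count)
	devices := [1]^android.ANeuralNetworksDevice{device}
	if android.ANeuralNetworksModel_getSupportedOperationsForDevices(model, raw_data(devices[:]), 1, raw_data(supported)) != .NO_ERROR {
		delete(supported)
		return nil
	}
	return supported
}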
ANeuralNetworksModel_identifyInputsAndOutputs ¶
ANeuralNetworksModel_identifyInputsAndOutputs :: proc "c" (model: ^ANeuralNetworksModel, inputCount: u32, inputs: [^]u32, outputCount: u32, outputs: [^]u32) -> NNResultCode ---
*
* Specifies which operands will be the model's inputs and
* outputs. Every model must have at least one input and one output.
*
* An operand cannot be used for both input and output. Doing so will
* return an error.
*
* @param model The model to be modified.
* @param inputCount The number of entries in the inputs array.
* @param inputs An array of indexes identifying the input operands.
* @param outputCount The number of entries in the outputs array.
* @param outputs An array of indexes identifying the output operands.
*
* The operands specified by inputs and outputs must have been
* previously added by calls to {@link ANeuralNetworksModel_addOperand}.
*
* Attempting to modify a model once {@link ANeuralNetworksModel_finish} has been
* called will return an error.
*
* See {@link ANeuralNetworksModel} for information on multithreaded usage.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
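Example: a sketch tying the model-building calls above together; the enum variant and struct field names (.TENSOR_FLOAT32, .INT32, .ADD, dimensionCount, and so on) are assumptions mirroring the C names. It builds `out = a + b` for 1x2 f32 tensors; intermediate error checks are elided for brevity.

package nn_sketch

import android "core:sys/android"

build_add_model :: proc() -> ^android.ANeuralNetworksModel {
	model: ^android.ANeuralNetworksModel
	if android.ANeuralNetworksModel_create(&model) != .NO_ERROR {
		return nil
	}

	dims := [2]u32{1, 2}
	tensor := android.ANeuralNetworksOperandType{
		type           = .TENSOR_FLOAT32,
		dimensionCount = 2,
		dimensions     = raw_data(dims[:]),
	}
	scalar := android.ANeuralNetworksOperandType{type = .INT32}

	android.ANeuralNetworksModel_addOperand(model, &tensor) // index 0: input a
	android.ANeuralNetworksModel_addOperand(model, &tensor) // index 1: input b
	android.ANeuralNetworksModel_addOperand(model, &scalar) // index 2: fused activation
	android.ANeuralNetworksModel_addOperand(model, &tensor) // index 3: output

	fuse_none := i32(0) // ANEURALNETWORKS_FUSED_NONE
	android.ANeuralNetworksModel_setOperandValue(model, 2, &fuse_none, size_of(i32))

	op_ins  := [3]u32{0, 1, 2}
	op_outs := [1]u32{3}
	android.ANeuralNetworksModel_addOperation(model, .ADD, 3, raw_data(op_ins[:]), 1, raw_data(op_outs[:]))

	model_ins := [2]u32{0, 1}
	android.ANeuralNetworksModel_identifyInputsAndOutputs(model, 2, raw_data(model_ins[:]), 1, raw_data(op_outs[:]))

	if android.ANeuralNetworksModel_finish(model) != .NO_ERROR {
		android.ANeuralNetworksModel_free(model)
		return nil
	}
	return model
}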
ANeuralNetworksModel_relaxComputationFloat32toFloat16 ¶
ANeuralNetworksModel_relaxComputationFloat32toFloat16 :: proc "c" (model: ^ANeuralNetworksModel, allow: bool) -> NNResultCode ---
*
* Specifies whether {@link ANEURALNETWORKS_TENSOR_FLOAT32} is allowed to be
* calculated with range and/or precision as low as that of the IEEE 754 16-bit
* floating-point format. By default, {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* must be calculated using at least the range and precision of the IEEE 754
* 32-bit floating-point format.
*
* The relaxComputationFloat32toFloat16 setting of the main model of
* a compilation overrides the values of the referenced models.
*
* @param model The model to be modified.
* @param allow 'true' indicates {@link ANEURALNETWORKS_TENSOR_FLOAT32} may be
* calculated with range and/or precision as low as that of the
* IEEE 754 16-bit floating point format. 'false' indicates
* {@link ANEURALNETWORKS_TENSOR_FLOAT32} must be calculated using
* at least the range and precision of the IEEE 754 32-bit floating
* point format.
*
* Attempting to modify a model once {@link ANeuralNetworksModel_finish} has been
* called will return an error.
*
* Available since NNAPI feature level 2.
* Available since API level 28.
*
* See {@link ANeuralNetworksModel} for information on multithreaded usage.
ANeuralNetworksModel_setOperandSymmPerChannelQuantParams ¶
ANeuralNetworksModel_setOperandSymmPerChannelQuantParams :: proc "c" (model: ^ANeuralNetworksModel, index: i32, channelQuant: ^ANeuralNetworksSymmPerChannelQuantParams) -> NNResultCode ---
*
* Sets an operand's per channel quantization parameters.
*
* Sets parameters required by a tensor of type
* {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL}.
* This function must be called for every tensor of type
* {@link ANEURALNETWORKS_TENSOR_QUANT8_SYMM_PER_CHANNEL} before
* calling {@link ANeuralNetworksModel_finish}.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
*
* @param model The model to be modified.
* @param index The index of the model operand we're setting.
* @param channelQuant The per channel quantization parameters for the operand.
* No memory in this struct needs to outlive the call to
* this function.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
ANeuralNetworksModel_setOperandValue ¶
ANeuralNetworksModel_setOperandValue :: proc "c" (model: ^ANeuralNetworksModel, index: i32, buffer: rawptr, length: uint) -> NNResultCode ---
*
* Sets an operand to a constant value.
*
* Values with a length smaller than or equal to
* ANEURALNETWORKS_MAX_SIZE_OF_IMMEDIATELY_COPIED_VALUES
* are immediately copied into the model.
*
* For values of length greater than
* ANEURALNETWORKS_MAX_SIZE_OF_IMMEDIATELY_COPIED_VALUES, a pointer to
* the buffer is stored within the model. The application must not change the
* content of this region until all executions using this model have
* completed. As the data may be copied during processing, modifying the data
* after this call yields undefined results. The provided buffer must outlive
* this model.
*
* For large tensors, using {@link ANeuralNetworksModel_setOperandValueFromMemory}
* is likely to be more efficient.
*
* To indicate that an optional operand should be considered missing,
* pass nullptr for buffer and 0 for length.
*
* Attempting to modify a model once {@link ANeuralNetworksModel_finish} has been
* called will return an error.
*
* See {@link ANeuralNetworksModel} for information on multithreaded usage.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param model The model to be modified.
* @param index The index of the model operand we're setting.
* @param buffer A pointer to the data to use.
* @param length The size in bytes of the data value.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
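*
* Example (a hedged Odin sketch; assumes operand index 0 of `model` is a scalar f32 operand):
*
* value: f32 = 0.5
* // A value this small is copied into the model immediately, so `value`
* // does not need to outlive the call.
* ANeuralNetworksModel_setOperandValue(model, 0, &value, size_of(f32))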
ANeuralNetworksModel_setOperandValueFromMemory ¶
ANeuralNetworksModel_setOperandValueFromMemory :: proc "c" (model: ^ANeuralNetworksModel, index: i32, memory: ^ANeuralNetworksMemory, offset: uint, length: uint) -> NNResultCode ---
*
* Sets an operand to a value stored in a memory object.
*
* The content of the memory is not copied. A reference to that memory is stored
* inside the model. The application must not change the content of the memory
* region until all executions using this model have completed. As the data may
* be copied during processing, modifying the data after this call yields
* undefined results.
*
* The provided memory must outlive this model.
*
* To indicate that an optional operand should be considered missing,
* use {@link ANeuralNetworksModel_setOperandValue} instead, passing nullptr for buffer.
*
* It is disallowed to set an operand value with shared memory backed by an AHardwareBuffer
* of a format other than AHARDWAREBUFFER_FORMAT_BLOB.
*
* It is disallowed to set an operand value with memory created from
* {@link ANeuralNetworksMemory_createFromDesc}.
*
* Attempting to modify a model once {@link ANeuralNetworksModel_finish} has been
* called will return an error.
*
* See {@link ANeuralNetworksModel} for information on multithreaded usage.
* See {@link ANeuralNetworksMemory_createFromAHardwareBuffer} for information on
* AHardwareBuffer usage.
*
* Available since NNAPI feature level 1.
* Available since API level 27.
*
* @param model The model to be modified.
* @param index The index of the model operand we're setting.
* @param memory The memory containing the data.
* @param offset This specifies the location of the data within the memory.
* The offset is in bytes from the start of memory.
* @param length The size in bytes of the data value.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
ANeuralNetworksModel_setOperandValueFromModel ¶
ANeuralNetworksModel_setOperandValueFromModel :: proc "c" (model: ^ANeuralNetworksModel, index: i32, value: ^ANeuralNetworksModel) -> NNResultCode ---
*
* Sets an operand to a value that is a reference to another NNAPI model.
*
* The referenced model must already have been finished by a call to
* {@link ANeuralNetworksModel_finish}.
*
* The {@link ANeuralNetworksModel_relaxComputationFloat32toFloat16} setting of
* referenced models is overridden by that setting of the main model of a
* compilation.
*
* The referenced model must outlive the model referring to it.
*
* Attempting to modify a model once {@link ANeuralNetworksModel_finish} has
* been called will return an error.
*
* See {@link ANeuralNetworksModel} for information on multithreaded usage.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
*
* @param model The model to be modified.
* @param index The index of the model operand we're setting.
* @param value The model to be referenced.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
ANeuralNetworks_getDefaultLoopTimeout ¶
ANeuralNetworks_getDefaultLoopTimeout :: proc "c" () -> u64 ---
*
* Get the default timeout value for WHILE loops.
*
* @return The default timeout value in nanoseconds.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
ANeuralNetworks_getDevice ¶
ANeuralNetworks_getDevice :: proc "c" (devIndex: u32, device: ^^ANeuralNetworksDevice) -> NNResultCode ---
*
* Get the representation of the specified device.
*
* @param devIndex The index of the specified device. Must be less than the number of available devices.
* @param device The representation of the specified device.
* The same representation will always be returned for the specified
* device.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
ANeuralNetworks_getDeviceCount ¶
ANeuralNetworks_getDeviceCount :: proc "c" (numDevices: ^u32) -> NNResultCode ---
*
* Get the number of available devices.
*
* @param numDevices Used to return the number of devices.
*
* @return ANEURALNETWORKS_NO_ERROR if successful.
*
* Available since NNAPI feature level 3.
* Available since API level 29.
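*
* Example (a hedged Odin sketch enumerating all devices; assumes NNResultCode has a
* NO_ERROR success variant):
*
* count: u32
* if ANeuralNetworks_getDeviceCount(&count) == .NO_ERROR {
*     for i in 0..<count {
*         dev: ^ANeuralNetworksDevice
*         ANeuralNetworks_getDevice(i, &dev)
*         // inspect `dev` here; the same representation is returned for a given index
*     }
* }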
ANeuralNetworks_getMaximumLoopTimeout ¶
ANeuralNetworks_getMaximumLoopTimeout :: proc "c" () -> u64 ---
*
* Get the maximum timeout value for WHILE loops.
*
* @return The maximum timeout value in nanoseconds.
*
* Available since NNAPI feature level 4.
* Available since API level 30.
ANeuralNetworks_getRuntimeFeatureLevel ¶
ANeuralNetworks_getRuntimeFeatureLevel :: proc "c" () -> FeatureLevelCode ---
*
* Get the NNAPI runtime feature level.
*
* Since API level 31 (NNAPI feature level 5), the NNAPI runtime (libneuralnetworks.so) and its
* API specification can be updated between Android API releases.
*
* On Android devices with API level 31 and newer, for NNAPI runtime feature discovery,
* the NNAPI runtime feature level must be used instead of the Android device API level.
*
* On Android devices with API level 30 and older, the Android API level of the Android
* device must be used for NNAPI runtime feature discovery. Enum values in
* {@link FeatureLevelCode} from feature level 1 to 5 have their corresponding Android
* API levels listed in their documentation, and each such enum value equals the corresponding
* API level. This allows using the Android API level as the feature level.
* This mapping between enum value and Android API level does not exist for feature levels
* after NNAPI feature level 5 and API levels after S (31).
*
* Example usage:
* int device_api_level = android_get_device_api_level()
* int64_t runtime_feature_level = (device_api_level < __ANDROID_API_S__) ?
* device_api_level : ANeuralNetworks_getRuntimeFeatureLevel()
*
* Runtime feature level is closely related to NNAPI device feature level
* ({@link ANeuralNetworksDevice_getFeatureLevel}), which indicates an NNAPI device feature level
* (the most advanced NNAPI specification and features that the driver implements).
* This function expresses NNAPI runtime feature level, which indicates the most advanced
* NNAPI specification and features the runtime implements. An NNAPI device feature level is
* always less than or equal to the runtime feature level.
*
* This function returns a {@link FeatureLevelCode} enum value,
* which is the NNAPI specification version that this NNAPI runtime implements.
* It is NOT an Android API level.
*
* Available since NNAPI feature level 5.
* Available since API level 31.
AObbInfo_delete ¶
AObbInfo_delete :: proc "c" (obbInfo: ^AObbInfo) ---
*
* Destroy the AObbInfo object. You must call this when finished with the object.
AObbInfo_getFlags ¶
*
* Get the flags of an OBB file.
AObbInfo_getPackageName ¶
*
* Get the package name for the OBB.
AObbInfo_getVersion ¶
*
* Get the version of an OBB file.
AObbScanner_getObbInfo ¶
*
* Scan an OBB and get information about it.
APerformanceHint_closeSession ¶
APerformanceHint_closeSession :: proc "c" (session: ^APerformanceHintSession) ---
*
* Release the performance hint manager pointer acquired via
* {@link APerformanceHint_createSession}.
*
* Available since API level 33.
*
* @param session The performance hint session instance to release.
APerformanceHint_createSession ¶
APerformanceHint_createSession :: proc "c" (manager: ^APerformanceHintManager, threadIds: [^]i32, size: uint, initialTargetWorkDurationNanos: i64) -> ^APerformanceHintSession ---
*
* Creates a session for the given set of threads and sets their initial target work
* duration.
*
* Available since API level 33.
*
* @param manager The performance hint manager instance.
* @param threadIds The list of threads to be associated with this session. They must be part of
* this app's thread group.
* @param size the size of threadIds.
* @param initialTargetWorkDurationNanos The desired duration in nanoseconds for the new session.
* This must be positive.
* @return session instance on success, nullptr on failure.
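*
* Example (a hedged Odin sketch of the whole session flow; the thread id and durations
* are illustrative only):
*
* manager := APerformanceHint_getManager()
* tids := [1]i32{123} // must be thread ids from this app's thread group
* session := APerformanceHint_createSession(manager, raw_data(tids[:]), 1, 16_666_666) // ~60 Hz target
* if session != nil {
*     // ... do one cycle of work and measure it ...
*     APerformanceHint_reportActualWorkDuration(session, 15_000_000)
*     APerformanceHint_closeSession(session)
* }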
APerformanceHint_getManager ¶
APerformanceHint_getManager :: proc "c" () -> ^APerformanceHintManager ---
*
* Acquire an instance of the performance hint manager.
*
* Available since API level 33.
*
* @return manager instance on success, nullptr on failure.
APerformanceHint_getPreferredUpdateRateNanos ¶
APerformanceHint_getPreferredUpdateRateNanos :: proc "c" (manager: ^APerformanceHintManager) -> i64 ---
*
* Get preferred update rate information for this device.
*
* Available since API level 33.
*
* @param manager The performance hint manager instance.
* @return the preferred update rate supported by device software.
APerformanceHint_reportActualWorkDuration ¶
APerformanceHint_reportActualWorkDuration :: proc "c" (session: ^APerformanceHintSession, actualDurationNanos: i64) -> i32 ---
*
* Reports the actual duration for the last cycle of work.
*
* The system will attempt to adjust the core placement of the threads within the thread
* group and/or the frequency of the core on which they are run to bring the actual duration
* close to the target duration.
*
* Available since API level 33.
*
* @param session The performance hint session instance to update.
* @param actualDurationNanos how long the thread group took to complete its last task in
* nanoseconds. This must be positive.
* @return 0 on success
* EINVAL if actualDurationNanos is not positive.
* EPIPE if communication with the system service has failed.
APerformanceHint_reportActualWorkDuration2 ¶
APerformanceHint_reportActualWorkDuration2 :: proc "c" (session: ^APerformanceHintSession, workDuration: ^AWorkDuration) -> i32 ---
*
* Reports the durations for the last cycle of work.
*
* The system will attempt to adjust the scheduling and performance of the
* threads within the thread group to bring the actual duration close to the target duration.
*
* Available since API level 35.
*
* @param session The {@link APerformanceHintSession} instance to update.
* @param workDuration The {@link AWorkDuration} structure of times the thread group took to
* complete its last task, in nanoseconds, broken down into different components.
*
* The work period start timestamp and actual total duration must be greater than zero.
*
* The actual CPU and GPU durations must be greater than or equal to zero, and at least one
* of them must be greater than zero. When one of them is equal to zero, it means that type
* of work was not measured for this workload.
*
* @return 0 on success.
* EINVAL if any duration is an invalid number.
* EPIPE if communication with the system service has failed.
APerformanceHint_setPreferPowerEfficiency ¶
APerformanceHint_setPreferPowerEfficiency :: proc "c" (session: ^APerformanceHintSession, enabled: bool) -> i32 ---
*
* This tells the session that these threads can be
* safely scheduled to prefer power efficiency over performance.
*
* Available since API level 35.
*
* @param session The performance hint session instance to update.
* @param enabled The flag which sets whether this session will use power-efficient scheduling.
* @return 0 on success.
* EPIPE if communication with the system service has failed.
APerformanceHint_setThreads ¶
APerformanceHint_setThreads :: proc "c" (session: ^APerformanceHintSession, threadIds: [^]i32, size: uint) -> i32 ---
*
* Set a list of threads to the performance hint session. This operation will replace
* the current list of threads with the given list of threads.
*
* Available since API level 34.
*
* @param session The performance hint session instance to update.
* @param threadIds The list of threads to be associated with this session. They must be part of
* this app's thread group.
* @param size The size of the list of threadIds.
* @return 0 on success.
* EINVAL if the list of thread ids is empty or if any of the thread ids are not part of
* the thread group.
* EPIPE if communication with the system service has failed.
* EPERM if any thread id doesn't belong to the application.
APerformanceHint_updateTargetWorkDuration ¶
APerformanceHint_updateTargetWorkDuration :: proc "c" (session: ^APerformanceHintSession, targetDurationNanos: i64) -> i32 ---
*
* Updates this session's target duration for each cycle of work.
*
* Available since API level 33.
*
* @param session The performance hint session instance to update.
* @param targetDurationNanos the new desired duration in nanoseconds. This must be positive.
* @return 0 on success
* EINVAL if targetDurationNanos is not positive.
* EPIPE if communication with the system service has failed.
APermissionManager_checkPermission ¶
APermissionManager_checkPermission :: proc "c" (permission: cstring, pid: i32, uid: u32, outResult: ^PermissionManagerResult) -> PermissionManagerStatus ---
*
* Checks whether the package with the given pid/uid has been granted a permission.
*
* Note that the Java API of Context#checkPermission() is usually faster due to caching,
* and thus is preferred over this API wherever possible.
*
* Available since API level 31.
*
* @param permission the permission to be checked.
* @param pid the process id of the package to be checked.
* @param uid the uid of the package to be checked.
* @param outResult output of the permission check result.
*
* @return error codes if any error happened during the check.
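*
* Example (a hedged Odin sketch; the pid/uid values are illustrative and the .OK variant
* of PermissionManagerStatus is assumed):
*
* pid: i32 = 1234 // use the real pid/uid of the package being checked
* uid: u32 = 10001
* result: PermissionManagerResult
* if APermissionManager_checkPermission("android.permission.CAMERA", pid, uid, &result) == .OK {
*     // `result` now holds the grant state of the permission
* }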
ASensorEventQueue_disableSensor ¶
ASensorEventQueue_disableSensor :: proc "c" (queue: ^ASensorEventQueue, sensor: ^ASensor) -> i32 ---
*
* Disable the selected sensor.
*
* Stop event reports from the sensor to the specified sensor event queue.
*
* \param queue {@link ASensorEventQueue} to be changed
* \param sensor {@link ASensor} to be disabled
* \return 0 on success or a negative error code on failure.
ASensorEventQueue_enableSensor ¶
ASensorEventQueue_enableSensor :: proc "c" (queue: ^ASensorEventQueue, sensor: ^ASensor) -> i32 ---
*
* Enable the selected sensor at the default sampling rate.
*
* Start event reports of a sensor to the specified sensor event queue at a default rate.
*
* \param queue {@link ASensorEventQueue} for sensor events to be reported to.
* \param sensor {@link ASensor} to be enabled.
*
* \return 0 on success or a negative error code on failure.
ASensorEventQueue_getEvents ¶
ASensorEventQueue_getEvents :: proc "c" (queue: ^ASensorEventQueue, events: [^]ASensorEvent, count: uint) -> int ---
*
* Retrieve pending events in the sensor event queue.
*
* Retrieve the next available events from the queue into a specified event array.
*
* \param queue {@link ASensorEventQueue} to get events from
* \param events pointer to an array of {@link ASensorEvent}.
* \param count max number of events that can be filled into the events array.
* \return number of events returned on success; negative error code when
* no events are pending or an error has occurred.
*
* Examples:
*
* ASensorEvent event
* ssize_t numEvent = ASensorEventQueue_getEvents(queue, &event, 1)
*
* ASensorEvent eventBuffer[8]
* ssize_t numEvent = ASensorEventQueue_getEvents(queue, eventBuffer, 8)
*
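* The same pattern in these Odin bindings (a hedged sketch; assumes `queue` is a valid
* ^ASensorEventQueue):
*
* event_buf: [8]ASensorEvent
* num := ASensorEventQueue_getEvents(queue, raw_data(event_buf[:]), len(event_buf))
* if num > 0 {
*     // num events are now valid in event_buf[0..<num]
* }
*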
ASensorEventQueue_hasEvents ¶
ASensorEventQueue_hasEvents :: proc "c" (queue: ^ASensorEventQueue) -> i32 ---
*
* Determine if a sensor event queue has pending events to be processed.
*
* \param queue {@link ASensorEventQueue} to be queried
* \return 1 if the queue has events, 0 if it does not,
* or a negative value if there is an error.
ASensorEventQueue_registerSensor ¶
ASensorEventQueue_registerSensor :: proc "c" (queue: ^ASensorEventQueue, sensor: ^ASensor, samplingPeriodUs: i32, maxBatchReportLatencyUs: i64) -> i32 ---
*
* Enable the selected sensor with sampling and report parameters.
*
* Enable the selected sensor at a specified sampling period and max batch report latency.
* To disable the sensor, use {@link ASensorEventQueue_disableSensor}.
*
* \param queue {@link ASensorEventQueue} for sensor events to be reported to.
* \param sensor {@link ASensor} to be enabled.
* \param samplingPeriodUs sampling period of the sensor in microseconds.
* \param maxBatchReportLatencyUs maximum time interval, in microseconds, between two batches
* of sensor events being delivered. For sensor streaming, set to 0.
* \return 0 on success or a negative error code on failure.
ASensorEventQueue_requestAdditionalInfoEvents ¶
ASensorEventQueue_requestAdditionalInfoEvents :: proc "c" (queue: ^ASensorEventQueue, enable: bool) -> i32 ---
*
* Request that {@link ASENSOR_TYPE_ADDITIONAL_INFO} events be delivered on
* the given {@link ASensorEventQueue}.
*
* Sensor data events are always delivered to the {@link ASensorEventQueue}.
*
* The {@link ASENSOR_TYPE_ADDITIONAL_INFO} events will be returned through
* {@link ASensorEventQueue_getEvents}. The client is responsible for checking
* {@link ASensorEvent#type} to determine the event type prior to handling of
* the event.
*
* The client must be tolerant of any value for
* {@link AAdditionalInfoEvent#type}, as new values may be defined in the future
* and may be delivered to the client.
*
* Available since API level 29.
*
* \param queue {@link ASensorEventQueue} to configure
* \param enable true to request {@link ASENSOR_TYPE_ADDITIONAL_INFO} events,
* false to stop receiving events
* \return 0 on success or a negative error code on failure
ASensorEventQueue_setEventRate ¶
ASensorEventQueue_setEventRate :: proc "c" (queue: ^ASensorEventQueue, sensor: ^ASensor, usec: i32) -> i32 ---
*
* Sets the delivery rate of events in microseconds for the given sensor.
*
* This function has to be called after {@link ASensorEventQueue_enableSensor}.
* Note that this is a hint only; generally events will arrive at a higher
* rate. It is an error to set a rate below the value returned by
* ASensor_getMinDelay().
*
* \param queue {@link ASensorEventQueue} to which sensor events are delivered.
* \param sensor {@link ASensor} whose sampling rate is to be updated.
* \param usec sensor sampling period (1/sampling rate) in microseconds
* \return 0 on success or a negative error code on failure.
ASensorManager_configureDirectReport ¶
ASensorManager_configureDirectReport :: proc "c" (manager: ^ASensorManager, sensor: ^ASensor, channelId: i32, rate: SensorDirectReportRate) -> i32 ---
*
* Configure direct report on a channel.
*
* Configure sensor direct report on a direct channel: set rate to a value other than
* {@link ASENSOR_DIRECT_RATE_STOP} so that sensor events can be directly
* written into the shared memory region used for creating the buffer. On success it returns a
* positive token which can be used to identify sensor events from different sensors. Calling
* with rate {@link ASENSOR_DIRECT_RATE_STOP} will stop direct report of the sensor specified
* in the channel.
*
* To stop all active sensor direct report configured to a channel, set sensor to NULL and rate to
* {@link ASENSOR_DIRECT_RATE_STOP}.
*
* In order to successfully configure a direct report, the sensor has to support the specified rate
* and the channel type, which can be checked by {@link ASensor_getHighestDirectReportRateLevel} and
* {@link ASensor_isDirectChannelTypeSupported}, respectively.
*
* Example:
*
* ASensorManager *manager = ...
* ASensor *sensor = ...
* int channelId = ...
*
* ASensorManager_configureDirectReport(manager, sensor, channelId, ASENSOR_DIRECT_RATE_FAST)
*
* Available since API level 26.
*
* \param manager the {@link ASensorManager} instance obtained from
* {@link ASensorManager_getInstanceForPackage}.
* \param sensor a {@link ASensor} to denote which sensor to operate on. It can be NULL if rate
* is {@link ASENSOR_DIRECT_RATE_STOP}, denoting stopping of all active sensor
* direct report.
* \param channelId channel id (a positive integer) returned from
* {@link ASensorManager_createSharedMemoryDirectChannel} or
* {@link ASensorManager_createHardwareBufferDirectChannel}.
* \param rate one of predefined ASENSOR_DIRECT_RATE_... that is supported by the sensor.
* \return positive token for success or negative error code.
ASensorManager_createEventQueue ¶
ASensorManager_createEventQueue :: proc "c" (manager: ^ASensorManager, looper: ^ALooper, ident: i32, callback: ALooper_callbackFunc, data: rawptr) -> ^ASensorEventQueue ---
*
* Creates a new sensor event queue and associates it with a looper.
*
* "ident" is an identifier for the events that will be returned when
* calling ALooper_pollOnce(). The identifier must be >= 0, or
* ALOOPER_POLL_CALLBACK if providing a non-NULL callback.
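*
* Example (a hedged Odin sketch; assumes `looper` is a valid ^ALooper for this thread and
* that SensorType has an ACCELEROMETER variant):
*
* manager := ASensorManager_getInstanceForPackage("com.example.app")
* accel := ASensorManager_getDefaultSensor(manager, .ACCELEROMETER)
* queue := ASensorManager_createEventQueue(manager, looper, 1, nil, nil)
* if queue != nil && accel != nil {
*     ASensorEventQueue_enableSensor(queue, accel)
* }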
ASensorManager_createHardwareBufferDirectChannel ¶
ASensorManager_createHardwareBufferDirectChannel :: proc "c" (manager: ^ASensorManager, buffer: ^AHardwareBuffer, size: uint) -> i32 ---
*
* Create direct channel based on AHardwareBuffer
*
* Create a direct channel of {@link ASENSOR_DIRECT_CHANNEL_TYPE_HARDWARE_BUFFER} type to be used
* for configuring sensor direct report.
*
* Available since API level 26.
*
* \param manager the {@link ASensorManager} instance obtained from
* {@link ASensorManager_getInstanceForPackage}.
* \param buffer {@link AHardwareBuffer} instance created by {@link AHardwareBuffer_allocate}.
* \param size the intended size to be used; must be less than or equal to the size of the buffer.
*
* \return a positive integer as a channel id to be used in
* {@link ASensorManager_destroyDirectChannel} and
* {@link ASensorManager_configureDirectReport}, or a value less than or equal to 0 on failure.
ASensorManager_createSharedMemoryDirectChannel ¶
ASensorManager_createSharedMemoryDirectChannel :: proc "c" (manager: ^ASensorManager, fd: i32, size: uint) -> i32 ---
*
* Create direct channel based on shared memory
*
* Create a direct channel of {@link ASENSOR_DIRECT_CHANNEL_TYPE_SHARED_MEMORY} to be used
* for configuring sensor direct report.
*
* Available since API level 26.
*
* \param manager the {@link ASensorManager} instance obtained from
* {@link ASensorManager_getInstanceForPackage}.
* \param fd file descriptor representing a shared memory created by
* {@link ASharedMemory_create}
* \param size size to be used; must be less than or equal to the size of the shared memory.
*
* \return a positive integer as a channel id to be used in
* {@link ASensorManager_destroyDirectChannel} and
* {@link ASensorManager_configureDirectReport}, or a value less than or equal to 0 on failure.
ASensorManager_destroyDirectChannel ¶
ASensorManager_destroyDirectChannel :: proc "c" (manager: ^ASensorManager, channelId: i32) ---
*
* Destroy a direct channel
*
* Destroy a direct channel previously created by using one of
* ASensorManager_create*DirectChannel() derivative functions.
* Note that the buffer used for creating the direct channel does not get destroyed with
* ASensorManager_destroyDirectChannel and has to be closed or released separately.
*
* Available since API level 26.
*
* \param manager the {@link ASensorManager} instance obtained from
* {@link ASensorManager_getInstanceForPackage}.
* \param channelId channel id (a positive integer) returned from
* {@link ASensorManager_createSharedMemoryDirectChannel} or
* {@link ASensorManager_createHardwareBufferDirectChannel}.
ASensorManager_destroyEventQueue ¶
ASensorManager_destroyEventQueue :: proc "c" (manager: ^ASensorManager, queue: ^ASensorEventQueue) -> i32 ---
*
* Destroys the event queue and frees all resources associated with it.
ASensorManager_getDefaultSensor ¶
ASensorManager_getDefaultSensor :: proc "c" (manager: ^ASensorManager, type: SensorType) -> ^ASensor ---
*
* Returns the default sensor for the given type, or NULL if no sensor
* of that type exists.
ASensorManager_getDefaultSensorEx ¶
ASensorManager_getDefaultSensorEx :: proc "c" (manager: ^ASensorManager, type: SensorType, wakeUp: bool) -> ^ASensor ---
*
* Returns the default sensor with the given type and wakeUp properties, or NULL if no sensor
* with this type and wakeUp properties exists.
*
* Available since API level 21.
ASensorManager_getDynamicSensorList ¶
ASensorManager_getDynamicSensorList :: proc "c" (manager: ^ASensorManager, list: ^[^]^ASensor) -> int ---
*
* Returns the list of available dynamic sensors. If there are no dynamic
* sensors available, returns nullptr in list.
*
* Each time this is called, the previously returned list is deallocated and
* must no longer be used.
*
* Clients should call this if they receive a sensor update from
* {@link ASENSOR_TYPE_DYNAMIC_SENSOR_META} indicating the sensors have changed.
* If this happens, previously received lists from this method will be stale.
*
* Available since API level 33.
*
* \param manager the {@link ASensorManager} instance obtained from
* {@link ASensorManager_getInstanceForPackage}.
* \param list the returned list of dynamic sensors.
* \return positive number of returned sensors or negative error code.
* BAD_VALUE: manager is NULL.
ASensorManager_getInstance ¶
ASensorManager_getInstance :: proc "c" () -> ^ASensorManager ---
*
* Get a reference to the sensor manager. ASensorManager is a singleton
* per package as different packages may have access to different sensors.
*
* Deprecated: Use ASensorManager_getInstanceForPackage(const char*) instead.
* Deprecated since API level 26.
*
* Example:
*
* ASensorManager* sensorManager = ASensorManager_getInstance()
*
ASensorManager_getInstanceForPackage ¶
ASensorManager_getInstanceForPackage :: proc "c" (packageName: cstring) -> ^ASensorManager ---
*
* Get a reference to the sensor manager. ASensorManager is a singleton
* per package as different packages may have access to different sensors.
*
* Example:
*
* ASensorManager* sensorManager = ASensorManager_getInstanceForPackage("foo.bar.baz")
*
* Available since API level 26.
ASensorManager_getSensorList ¶
ASensorManager_getSensorList :: proc "c" (manager: ^ASensorManager, list: ^[^]^ASensor) -> i32 ---
*
* Returns the list of available sensors. The returned list is owned by the
* sensor manager and will not change between calls to this function.
*
* \param manager the {@link ASensorManager} instance obtained from
* {@link ASensorManager_getInstanceForPackage}.
* \param list the returned list of sensors.
* \return positive number of returned sensors or negative error code.
* BAD_VALUE: manager is NULL.
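*
* Example (a hedged Odin sketch; assumes `manager` was obtained via
* ASensorManager_getInstanceForPackage):
*
* list: [^]^ASensor
* n := ASensorManager_getSensorList(manager, &list)
* for i in 0..<n {
*     sensor := list[i]
*     _ = ASensor_getType(sensor) // inspect each sensor, e.g. by its type
* }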
ASensor_getFifoMaxEventCount ¶
*
* Returns the maximum size of batches for this sensor. Batches will often be
* smaller, as the hardware fifo might be used for other sensors.
*
* Available since API level 21.
ASensor_getFifoReservedEventCount ¶
*
* Returns the hardware batch fifo size reserved to this sensor.
*
* Available since API level 21.
ASensor_getHandle ¶
*
* Returns the sensor's handle.
*
* The handle identifies the sensor within the system and is included in the
* sensor field of {@link ASensorEvent}, including those sent with type
* {@link ASENSOR_TYPE_ADDITIONAL_INFO}.
*
* A sensor's handle is able to be used to map {@link ASENSOR_TYPE_ADDITIONAL_INFO} events to the
* sensor that generated the event.
*
* It is important to note that the value returned by {@link ASensor_getHandle} is not the same as
* the value returned by the Java API <a href="/reference/android/hardware/Sensor#getId()">
* android.hardware.Sensor's getId()</a> and no mapping exists between the values.
*
* Available since API level 29.
ASensor_getHighestDirectReportRateLevel ¶
ASensor_getHighestDirectReportRateLevel :: proc "c" (sensor: ^ASensor) -> SensorDirectReportRate ---
*
* Get the highest direct rate level that a sensor supports.
*
* Available since API level 26.
*
* \param sensor a {@link ASensor} to denote the sensor to be checked.
*
* \return a ASENSOR_DIRECT_RATE_... enum denoting the highest rate level supported by the sensor.
* If return value is {@link ASENSOR_DIRECT_RATE_STOP}, it means the sensor
* does not support direct report.
ASensor_getMinDelay ¶
*
* Returns the minimum delay allowed between events in microseconds.
* A value of zero means that this sensor doesn't report events at a
* constant rate, but rather only when new data is available.
ASensor_getName ¶
*
* Returns this sensor's name (non-localized).
ASensor_getReportingMode ¶
ASensor_getReportingMode :: proc "c" (sensor: ^ASensor) -> SensorReportingMode ---
*
* Returns the reporting mode for this sensor. One of AREPORTING_MODE_* constants.
*
* Available since API level 21.
ASensor_getResolution ¶
*
* Returns this sensor's resolution.
ASensor_getStringType ¶
*
* Returns this sensor's string type.
*
* Available since API level 21.
ASensor_getType ¶
ASensor_getType :: proc "c" (sensor: ^ASensor) -> SensorType ---
*
* Returns this sensor's type.
ASensor_getVendor ¶
*
* Returns this sensor's vendor's name (non-localized).
ASensor_isDirectChannelTypeSupported ¶
ASensor_isDirectChannelTypeSupported :: proc "c" (sensor: ^ASensor, channelType: SensorDirectChannelType) -> bool ---
*
* Test if sensor supports a certain type of direct channel.
*
* Available since API level 26.
*
* \param sensor a {@link ASensor} to denote the sensor to be checked.
* \param channelType Channel type constant, either
* {@link ASENSOR_DIRECT_CHANNEL_TYPE_SHARED_MEMORY}
* or {@link ASENSOR_DIRECT_CHANNEL_TYPE_HARDWARE_BUFFER}.
* \returns true if sensor supports the specified direct channel type.
ASensor_isWakeUpSensor ¶
*
* Returns true if this is a wake up sensor, false otherwise.
*
* Available since API level 21.
ASharedMemory_create ¶
*
* Create a shared memory region.
*
* Creates a shared memory region and returns a file descriptor. The resulting file descriptor can be
* mmap'ed into the process memory space with PROT_READ | PROT_WRITE | PROT_EXEC. Access to the shared
* memory region can be restricted with {@link ASharedMemory_setProt}.
*
* Use close() to release the shared memory region.
*
* Use <a href="/reference/android/os/ParcelFileDescriptor">android.os.ParcelFileDescriptor</a>
* to pass the file descriptor to another process. File descriptors may also be sent to other
* processes over a Unix domain socket with sendmsg and SCM_RIGHTS. See sendmsg(3) and
* cmsg(3) man pages for more information.
*
* If you intend to share this file descriptor with a child process after
* calling exec(3), note that you will need to use fcntl(2) with F_SETFD
* to clear the FD_CLOEXEC flag for this to work on all versions of Android.
*
* Available since API level 26.
*
* \param name an optional name.
* \param size size of the shared memory region
* \return file descriptor that denotes the shared memory
* -1 and sets errno on failure, or -EINVAL if the error is that size was 0.
ASharedMemory_dupFromJava ¶
ASharedMemory_dupFromJava :: proc "c" (env: ^^JNINativeInterface, sharedMemory: jobject) -> i32 ---
*
* Returns a dup'd FD from the given Java android.os.SharedMemory object. The returned file
* descriptor has all the same properties & capabilities as the FD returned from
* ASharedMemory_create(), however the protection flags will be the same as those of the
* android.os.SharedMemory object.
*
* Use close() to release the shared memory region.
*
* Available since API level 27.
*
* \param env The JNIEnv* pointer
* \param sharedMemory The Java android.os.SharedMemory object
* \return file descriptor that denotes the shared memory; -1 if the shared memory object is
* already closed, if the JNIEnv or jobject is NULL, or if there are too many open file
* descriptors (errno=EMFILE)
ASharedMemory_getSize ¶
*
* Get the size of the shared memory region.
*
* Available since API level 26.
*
* \param fd file descriptor of the shared memory region
* \return size in bytes; 0 if fd is not a valid shared memory file descriptor.
ASharedMemory_setProt ¶
*
* Restrict access to a shared memory region.
*
* This function restricts access to a shared memory region. Access can only be removed. The effect
* applies globally to all file descriptors in all processes across the system that refer to this
* shared memory region. Existing memory mapped regions are not affected.
*
* It is a common use case to create a shared memory region, map it read/write locally to initialize
* content, and then send the shared memory to another process with read-only access. A code example
* is below (error handling omitted).
*
*
* int fd = ASharedMemory_create("memory", 128)
*
* // By default it has PROT_READ | PROT_WRITE | PROT_EXEC.
* size_t memSize = ASharedMemory_getSize(fd)
* char *buffer = (char *) mmap(NULL, memSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0)
*
* strcpy(buffer, "This is an example.") // trivially initialize content
*
* // limit access to read only
* ASharedMemory_setProt(fd, PROT_READ)
*
* // share fd with another process here and the other process can only map with PROT_READ.
*
* Available since API level 26.
*
* \param fd file descriptor of the shared memory region.
* \param prot any bitwise-or'ed combination of PROT_READ, PROT_WRITE, PROT_EXEC denoting
* updated access. Note access can only be removed, but not added back.
* \return 0 for success, -1 and sets errno on failure.
AStorageManager_delete ¶
AStorageManager_delete :: proc "c" (mgr: ^AStorageManager) ---
*
* Release AStorageManager instance.
AStorageManager_getMountedObbPath ¶
AStorageManager_getMountedObbPath :: proc "c" (mgr: ^AStorageManager, filename: cstring) -> cstring ---
*
* Get the mounted path for an OBB.
AStorageManager_isObbMounted ¶
AStorageManager_isObbMounted :: proc "c" (mgr: ^AStorageManager, filename: cstring) -> i32 ---
*
* Check whether an OBB is mounted.
AStorageManager_mountObb ¶
AStorageManager_mountObb :: proc "c" (mgr: ^AStorageManager, filename: cstring, key: cstring, cb: AStorageManager_obbCallbackFunc, data: rawptr) ---
*
* Attempts to mount an OBB file. This is an asynchronous operation.
*
* Since API level 33, this function can only be used to mount unencrypted OBBs,
* i.e. the {@code key} parameter must be {@code null} or an empty string. Note
* that even before API level 33, mounting encrypted OBBs didn't work on many
* Android device implementations. Applications should not assume any particular
* behavior when {@code key} is nonempty.
AStorageManager_new ¶
AStorageManager_new :: proc "c" () -> ^AStorageManager ---
*
* Obtains a new instance of AStorageManager.
AStorageManager_unmountObb ¶
AStorageManager_unmountObb :: proc "c" (mgr: ^AStorageManager, filename: cstring, force: i32, cb: AStorageManager_obbCallbackFunc, data: rawptr) ---
*
* Attempts to unmount an OBB file. This is an asynchronous operation.
ASurfaceControl_acquire ¶
ASurfaceControl_acquire :: proc "c" (surface_control: ^ASurfaceControl) ---
*
* Acquires a reference on the given ASurfaceControl object. This prevents the object
* from being deleted until the reference is removed.
*
* To release the reference, use the ASurfaceControl_release function.
*
* Available since API level 31.
ASurfaceControl_create ¶
ASurfaceControl_create :: proc "c" (parent: ^ASurfaceControl, debug_name: cstring) -> ^ASurfaceControl ---
*
* See ASurfaceControl_createFromWindow.
*
* Available since API level 29.
ASurfaceControl_createFromWindow ¶
ASurfaceControl_createFromWindow :: proc "c" (parent: ^ANativeWindow, debug_name: cstring) -> ^ASurfaceControl ---
*
* Creates an ASurfaceControl with either an ANativeWindow or an ASurfaceControl as its parent.
* \a debug_name is a debug name associated with this surface. It can be used to
* identify this surface in the SurfaceFlinger's layer tree. It must not be
* null.
*
* The caller takes ownership of the ASurfaceControl returned and must release it
* using ASurfaceControl_release below.
*
* Available since API level 29.
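*
* Example (a hedged Odin sketch; assumes `window` is a valid ^ANativeWindow):
*
* sc := ASurfaceControl_createFromWindow(window, "example-layer")
* if sc != nil {
*     // ... use `sc` in one or more transactions ...
*     ASurfaceControl_release(sc)
* }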
ASurfaceControl_release ¶
ASurfaceControl_release :: proc "c" (surface_control: ^ASurfaceControl) ---
*
* Removes a reference that was previously acquired with one of the following functions:
* ASurfaceControl_createFromWindow
* ASurfaceControl_create
* ANativeWindow_acquire
* The surface and its children may remain on display as long as their parent remains on display.
*
* Available since API level 29.
ASurfaceTexture_acquireANativeWindow ¶
ASurfaceTexture_acquireANativeWindow :: proc "c" (st: ^ASurfaceTexture) -> ^ANativeWindow ---
*
* Returns a reference to an ANativeWindow (i.e. the Producer) for this SurfaceTexture.
* This is equivalent to Java's: Surface sur = new Surface(surfaceTexture)
*
* Available since API level 28.
*
* \param st A ASurfaceTexture reference acquired with ASurfaceTexture_fromSurfaceTexture()
* @return A reference to an ANativeWindow. This reference MUST BE released when no longer needed
* using ANativeWindow_release(). Failing to do so will result in leaked resources. nullptr is
* returned if \p st is null or if it's not an instance of android.graphics.SurfaceTexture
ASurfaceTexture_attachToGLContext ¶
ASurfaceTexture_attachToGLContext :: proc "c" (st: ^ASurfaceTexture, texName: u32) -> i32 ---
*
* Attach the SurfaceTexture to the OpenGL ES context that is current on the calling thread. A
* new OpenGL ES texture object is created and populated with the SurfaceTexture image frame
* that was current at the time of the last call to {@link ASurfaceTexture_detachFromGLContext}.
* This new texture is bound to the GL_TEXTURE_EXTERNAL_OES texture target.
*
* This can be used to access the SurfaceTexture image contents from multiple OpenGL ES
* contexts. Note, however, that the image contents are only accessible from one OpenGL ES
* context at a time.
*
* Available since API level 28.
*
* \param st A ASurfaceTexture reference acquired with ASurfaceTexture_fromSurfaceTexture()
* \param texName The name of the OpenGL ES texture that will be created. This texture name
* must be unused in the OpenGL ES context that is current on the calling thread.
* \return 0 on success, negative posix error code otherwise (see <errno.h>)
ASurfaceTexture_detachFromGLContext ¶
ASurfaceTexture_detachFromGLContext :: proc "c" (st: ^ASurfaceTexture) -> i32 ---
*
* Detach the SurfaceTexture from the OpenGL ES context that owns the OpenGL ES texture object.
* This call must be made with the OpenGL ES context current on the calling thread. The OpenGL
* ES texture object will be deleted as a result of this call. After calling this method all
* calls to {@link ASurfaceTexture_updateTexImage} will fail until a successful call to
* {@link ASurfaceTexture_attachToGLContext} is made.
*
* This can be used to access the SurfaceTexture image contents from multiple OpenGL ES
* contexts. Note, however, that the image contents are only accessible from one OpenGL ES
* context at a time.
*
* Available since API level 28.
*
* \param st A ASurfaceTexture reference acquired with ASurfaceTexture_fromSurfaceTexture()
* \return 0 on success, negative posix error code otherwise (see <errno.h>)
ASurfaceTexture_fromSurfaceTexture ¶
ASurfaceTexture_fromSurfaceTexture :: proc "c" (env: ^^JNINativeInterface, surfacetexture: jobject) -> ^ASurfaceTexture ---
*
* Get a reference to the native ASurfaceTexture from the corresponding Java object.
*
* The caller must keep a reference to the Java SurfaceTexture during the lifetime of the returned
* ASurfaceTexture. Failing to do so could cause the ASurfaceTexture to stop functioning
* properly once the Java object gets finalized.
* However, this will not result in program termination.
*
* Available since API level 28.
*
* \param env JNI environment
* \param surfacetexture Instance of Java SurfaceTexture object
* \return native ASurfaceTexture reference or nullptr if the Java object is not a SurfaceTexture.
* The returned reference MUST BE released when it's no longer needed using
* ASurfaceTexture_release().
ASurfaceTexture_getTimestamp ¶
ASurfaceTexture_getTimestamp :: proc "c" (st: ^ASurfaceTexture) -> i64 ---
*
* Retrieve the timestamp associated with the texture image set by the most recent call to
* updateTexImage.
*
* This timestamp is in nanoseconds, and is normally monotonically increasing. The timestamp
* should be unaffected by time-of-day adjustments, and for a camera should be strictly
* monotonic but for a MediaPlayer may be reset when the position is set. The
* specific meaning and zero point of the timestamp depends on the source providing images to
* the SurfaceTexture. Unless otherwise specified by the image source, timestamps cannot
* generally be compared across SurfaceTexture instances, or across multiple program
* invocations. It is mostly useful for determining time offsets between subsequent frames.
*
* For EGL/Vulkan producers, this timestamp is the desired present time set with the
* EGL_ANDROID_presentation_time or VK_GOOGLE_display_timing extensions
*
* Available since API level 28.
*
* \param st A ASurfaceTexture reference acquired with ASurfaceTexture_fromSurfaceTexture()
ASurfaceTexture_getTransformMatrix ¶
ASurfaceTexture_getTransformMatrix :: proc "c" (st: ^ASurfaceTexture, mtx: [16]f32) ---
*
* Retrieve the 4x4 texture coordinate transform matrix associated with the texture image set by
* the most recent call to updateTexImage.
*
* This transform matrix maps 2D homogeneous texture coordinates of the form (s, t, 0, 1) with s
* and t in the inclusive range [0, 1] to the texture coordinate that should be used to sample
* that location from the texture. Sampling the texture outside of the range of this transform
* is undefined.
*
* The matrix is stored in column-major order so that it may be passed directly to OpenGL ES via
* the glLoadMatrixf or glUniformMatrix4fv functions.
*
* Available since API level 28.
*
* \param st A ASurfaceTexture reference acquired with ASurfaceTexture_fromSurfaceTexture()
* \param mtx the array into which the 4x4 matrix will be stored. The array must have exactly
* 16 elements.
ASurfaceTexture_release ¶
ASurfaceTexture_release :: proc "c" (st: ^ASurfaceTexture) ---
*
* Release the reference to the native ASurfaceTexture acquired with
* ASurfaceTexture_fromSurfaceTexture().
* Failing to do so will result in leaked memory and graphic resources.
*
* Available since API level 28.
*
* \param st A ASurfaceTexture reference acquired with ASurfaceTexture_fromSurfaceTexture()
ASurfaceTexture_updateTexImage ¶
ASurfaceTexture_updateTexImage :: proc "c" (st: ^ASurfaceTexture) -> i32 ---
*
* Update the texture image to the most recent frame from the image stream. This may only be
* called while the OpenGL ES context that owns the texture is current on the calling thread.
* It will implicitly bind its texture to the GL_TEXTURE_EXTERNAL_OES texture target.
*
* Available since API level 28.
*
* \param st A ASurfaceTexture reference acquired with ASurfaceTexture_fromSurfaceTexture()
* \return 0 on success, negative posix error code otherwise (see <errno.h>)
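*
* Example (a hedged Odin sketch of a per-frame update; assumes `st` was acquired with
* ASurfaceTexture_fromSurfaceTexture() and the owning GL context is current):
*
* if ASurfaceTexture_updateTexImage(st) == 0 {
*     ts := ASurfaceTexture_getTimestamp(st) // nanosecond timestamp of the new frame
*     _ = ts
*     // sample the GL_TEXTURE_EXTERNAL_OES texture here
* }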
ASurfaceTransactionStats_getASurfaceControls ¶
ASurfaceTransactionStats_getASurfaceControls :: proc "c" (surface_transaction_stats: ^ASurfaceTransactionStats, outASurfaceControls: ^^^ASurfaceControl, outASurfaceControlsSize: ^uint) ---
*
* \a outASurfaceControls returns an array of ASurfaceControl pointers that were updated during the
* transaction. Stats for the surfaces can be queried through ASurfaceTransactionStats functions.
* When the client is done using the array, it must release it by calling
* ASurfaceTransactionStats_releaseASurfaceControls.
*
* Available since API level 29.
*
* \a outASurfaceControlsSize returns the size of the ASurfaceControls array.
ASurfaceTransactionStats_getAcquireTime ¶
ASurfaceTransactionStats_getAcquireTime :: proc "c" (surface_transaction_stats: ^ASurfaceTransactionStats, surface_control: ^ASurfaceControl) -> i64 ---
*
* Returns the timestamp of when the CURRENT buffer was acquired. A buffer is considered
* acquired when its acquire_fence_fd has signaled. A buffer cannot be latched or presented until
* it is acquired. If no acquire_fence_fd was provided, this timestamp will be set to -1.
*
* Available since API level 29.
ASurfaceTransactionStats_getLatchTime ¶
ASurfaceTransactionStats_getLatchTime :: proc "c" (surface_transaction_stats: ^ASurfaceTransactionStats) -> i64 ---
*
* Returns the timestamp of when the frame was latched by the framework. Once a frame is
* latched by the framework, it is presented at the following hardware vsync.
*
* Available since API level 29.
ASurfaceTransactionStats_getPresentFenceFd ¶
ASurfaceTransactionStats_getPresentFenceFd :: proc "c" (surface_transaction_stats: ^ASurfaceTransactionStats) -> i32 ---
*
* Returns a sync fence that signals when the transaction has been presented.
* The recipient of the callback takes ownership of the fence and is responsible for closing
* it. If a device does not support present fences, a -1 will be returned.
*
* This query is not valid for the ASurfaceTransaction_OnCommit callback.
*
* Available since API level 29.
ASurfaceTransactionStats_getPreviousReleaseFenceFd ¶
ASurfaceTransactionStats_getPreviousReleaseFenceFd :: proc "c" (surface_transaction_stats: ^ASurfaceTransactionStats, surface_control: ^ASurfaceControl) -> i32 ---
*
* Returns the fence used to signal the release of the PREVIOUS buffer set on
* this surface. If this fence is valid (>=0), the PREVIOUS buffer has not yet been released and the
* fence will signal when the PREVIOUS buffer has been released. If the fence is -1, the PREVIOUS
* buffer is already released. The recipient of the callback takes ownership of the
* previousReleaseFenceFd and is responsible for closing it.
*
* Each time a buffer is set through ASurfaceTransaction_setBuffer() on a transaction
* which is applied, the framework takes a ref on this buffer. The framework treats the
* addition of a buffer to a particular surface as a unique ref. When a transaction updates or
* removes a buffer from a surface, or removes the surface itself from the tree, this ref is
* guaranteed to be released in the OnComplete callback for this transaction. The
* ASurfaceControlStats provided in the callback for this surface may contain an optional fence
* which must be signaled before the ref is assumed to be released.
*
* The client must ensure that all pending refs on a buffer are released before attempting to reuse
* this buffer, otherwise synchronization errors may occur.
*
* This query is not valid for the ASurfaceTransaction_OnCommit callback.
*
* Available since API level 29.
ASurfaceTransactionStats_releaseASurfaceControls ¶
ASurfaceTransactionStats_releaseASurfaceControls :: proc "c" (surface_controls: ^^ASurfaceControl) ---
*
* Releases the array of ASurfaceControls that were returned by
* ASurfaceTransactionStats_getASurfaceControls().
*
* Available since API level 29.
ASurfaceTransaction_apply ¶
ASurfaceTransaction_apply :: proc "c" (transaction: ^ASurfaceTransaction) ---
*
* Applies the updates accumulated in \a transaction.
*
* Note that the transaction is guaranteed to be applied atomically. The
* transactions which are applied on the same thread are also guaranteed to be
* applied in order.
*
* Available since API level 29.
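*
* Example (a hedged Odin sketch of a complete transaction; assumes `sc` is a valid
* ^ASurfaceControl and `buf` an ^AHardwareBuffer allocated with GPU_SAMPLED_IMAGE usage):
*
* txn := ASurfaceTransaction_create()
* ASurfaceTransaction_setBuffer(txn, sc, buf) // acquire_fence_fd defaults to -1
* ASurfaceTransaction_setBufferAlpha(txn, sc, 1.0)
* ASurfaceTransaction_apply(txn)
* ASurfaceTransaction_delete(txn)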
ASurfaceTransaction_clearFrameRate ¶
ASurfaceTransaction_clearFrameRate :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl) ---
*
* Clears the frame rate which is set for \a surface_control.
*
* This is equivalent to calling
* ASurfaceTransaction_setFrameRateWithChangeStrategy(
* transaction, 0, compatibility, changeFrameRateStrategy).
*
* Usage of this API won't directly affect the application's frame production pipeline. However,
* because the system may change the display refresh rate, calls to this function may result in
* changes to Choreographer callback timings, and changes to the time interval at which the system
* releases buffers back to the application.
*
* You can register for changes in the refresh rate using
* \a AChoreographer_registerRefreshRateCallback.
*
* See ASurfaceTransaction_setFrameRateWithChangeStrategy().
*
* Available since API level 34.
ASurfaceTransaction_create ¶
ASurfaceTransaction_create :: proc "c" () -> ^ASurfaceTransaction ---
*
* The caller takes ownership of the transaction and must release it using
* ASurfaceTransaction_delete() below.
*
* Available since API level 29.
ASurfaceTransaction_delete ¶
ASurfaceTransaction_delete :: proc "c" (transaction: ^ASurfaceTransaction) ---
*
* Destroys the \a transaction object.
*
* Available since API level 29.
ASurfaceTransaction_reparent ¶
ASurfaceTransaction_reparent :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, new_parent: ^ASurfaceControl) ---
*
* Reparents the \a surface_control from its old parent to the \a new_parent surface control.
* Any children of the reparented \a surface_control will remain children of the \a surface_control.
*
* The \a new_parent can be null. Surface controls with a null parent do not appear on the display.
*
* Available since API level 29.
ASurfaceTransaction_setBuffer ¶
ASurfaceTransaction_setBuffer :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, buffer: ^AHardwareBuffer, acquire_fence_fd: i32 = -1) ---
*
* Updates the AHardwareBuffer displayed for \a surface_control. If not -1, the
* acquire_fence_fd should be a file descriptor that is signaled when all pending work
* for the buffer is complete and the buffer can be safely read.
*
* The framework takes ownership of the \a acquire_fence_fd passed and is responsible
* for closing it.
*
* Note that the buffer must be allocated with AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE
* as the surface control might be composited using the GPU.
*
* Starting with API level 36, prefer using \a ASurfaceTransaction_setBufferWithRelease to
* set a buffer and a callback which will be invoked when the buffer is ready to be reused.
*
* Available since API level 29.
ASurfaceTransaction_setBufferAlpha ¶
ASurfaceTransaction_setBufferAlpha :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, alpha: f32) ---
*
* Sets the alpha for the buffer. It uses a premultiplied blending.
*
* The \a alpha must be between 0.0 and 1.0.
*
* Available since API level 29.
ASurfaceTransaction_setBufferDataSpace ¶
ASurfaceTransaction_setBufferDataSpace :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, data_space: ADataSpace) ---
*
* Sets the data space of the surface_control's buffers.
*
* If no data space is set, the surface control defaults to ADATASPACE_SRGB.
*
* Available since API level 29.
ASurfaceTransaction_setBufferTransform ¶
ASurfaceTransaction_setBufferTransform :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, transform: ANativeWindowTransform) ---
*
* \param transform The transform applied after the source rect is applied to the buffer. This
* parameter should be set to 0 for no transform. To specify a transform use the
* NATIVE_WINDOW_TRANSFORM_* enum.
*
* Available since API level 31.
ASurfaceTransaction_setBufferTransparency ¶
ASurfaceTransaction_setBufferTransparency :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, transparency: SurfaceTransactionTransparency) ---
*
* Updates whether the content for the buffer associated with this surface is
* completely opaque. If true, every pixel of content inside the buffer must be
* opaque or visual errors can occur.
*
* Available since API level 29.
ASurfaceTransaction_setBufferWithRelease ¶
ASurfaceTransaction_setBufferWithRelease :: proc "c" ( transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, buffer: ^AHardwareBuffer, acquire_fence_fd: i32, _context: rawptr, func: ASurfaceTransaction_OnBufferRelease, ) ---
*
* Updates the AHardwareBuffer displayed for \a surface_control. If not -1, the
* acquire_fence_fd should be a file descriptor that is signaled when all pending work
* for the buffer is complete and the buffer can be safely read.
*
* The framework takes ownership of the \a acquire_fence_fd passed and is responsible
* for closing it.
*
* Note that the buffer must be allocated with AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE
* as the surface control might be composited using the GPU.
*
* When the buffer is ready to be reused, the ASurfaceTransaction_OnBufferRelease
* callback will be invoked. If the buffer is null, the callback will not be invoked.
*
* Available since API level 36.
ASurfaceTransaction_setColor ¶
ASurfaceTransaction_setColor :: proc "c" ( transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, r: f32, g: f32, b: f32, alpha: f32, dataspace: ADataSpace, ) ---
*
* Updates the color for \a surface_control. This will make the background color for the
* ASurfaceControl visible in transparent regions of the surface. Colors \a r, \a g,
* and \a b must be within the range that is valid for \a dataspace. \a dataspace and \a alpha
* will be the dataspace and alpha set for the background color layer.
*
* Available since API level 29.
ASurfaceTransaction_setCrop ¶
ASurfaceTransaction_setCrop :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, crop: ^ARect) ---
*
* Bounds the surface and its children to the bounds specified. The crop and buffer size will be
* used to determine the bounds of the surface. If no crop is specified and the surface has no
* buffer, the surface bounds is only constrained by the size of its parent bounds.
*
* \param crop The bounds of the crop to apply.
*
* Available since API level 31.
ASurfaceTransaction_setDamageRegion ¶
ASurfaceTransaction_setDamageRegion :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, rects: [^]ARect, count: u32) ---
*
* Updates the region for the content on this surface updated in this
* transaction. If unspecified, the complete surface is assumed to be damaged.
*
* Available since API level 29.
ASurfaceTransaction_setDesiredHdrHeadroom ¶
ASurfaceTransaction_setDesiredHdrHeadroom :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, desiredHeadroom: f32) ---
*
* Sets the desired HDR headroom for the layer. See: ASurfaceTransaction_setExtendedRangeBrightness,
* prefer using this API for formats that conform to HDR standards like HLG or HDR10, that do not
* communicate a HDR/SDR ratio as part of generating the buffer.
*
* @param surface_control The layer whose desired HDR headroom is being specified
*
* @param desiredHeadroom The desired HDR/SDR ratio as represented as peakHdrBrightnessInNits /
* targetSdrWhitePointInNits. This can be used to communicate the max
* desired brightness range of the panel. The system may not be able to, or
* may choose not to, deliver the requested range.
*
* While requesting a large desired ratio will result in the most
* dynamic range, voluntarily reducing the requested range can help
* improve battery life as well as can improve quality by ensuring
* greater bit depth is allocated to the luminance range in use.
*
* Default value is 0.0f and indicates that the system will choose the best
* headroom for this surface control's content. Typically, this means that
* HLG/PQ encoded content will be displayed with some HDR headroom greater
* than 1.0.
*
* When called after ASurfaceTransaction_setExtendedRangeBrightness, the
* desiredHeadroom will override the desiredRatio provided by
* ASurfaceTransaction_setExtendedRangeBrightness. Conversely, when called
* before ASurfaceTransaction_setExtendedRangeBrightness, the desiredRatio
* provided by ASurfaceTransaction_setExtendedRangeBrightness will override
* the desiredHeadroom.
*
* Must be finite && >= 1.0f or 0.0f to indicate there is no desired
* headroom.
*
* Available since API level 35.
ASurfaceTransaction_setDesiredPresentTime ¶
ASurfaceTransaction_setDesiredPresentTime :: proc "c" (transaction: ^ASurfaceTransaction, desiredPresentTime: i64) ---
*
* Specifies a desiredPresentTime for the transaction. The framework will try to present
* the transaction at or after the time specified.
*
* Transactions will not be presented until all of their acquire fences have signaled even if the
* app requests an earlier present time.
*
* If an earlier transaction has a desired present time of x, and a later transaction has a desired
* present time that is before x, the later transaction will not preempt the earlier transaction.
*
* Available since API level 29.
ASurfaceTransaction_setEnableBackPressure ¶
ASurfaceTransaction_setEnableBackPressure :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, enableBackPressure: bool) ---
*
* Indicates whether to enable backpressure for buffer submission to a given SurfaceControl.
*
* By default backpressure is disabled, which means submitting a buffer prior to receiving
* a callback for the previous buffer could lead to that buffer being "dropped". In cases
* where you are optimizing for latency, this may be a desirable behavior! We had a new buffer
* ready, why shouldn't we show it?
*
* When backpressure is enabled, each buffer will be required to be presented
* before it is released and the callback delivered
* (absent the whole SurfaceControl being removed).
*
* Most apps are likely to have some sort of backpressure internally, e.g. you are
* waiting on the callback from frame N-2 before starting frame N. In high refresh
* rate scenarios there may not be much time between SurfaceFlinger completing frame
* N-1 (and therefore releasing buffer N-2) and beginning frame N. This means
* your app may not have enough time to respond in the callback. Using this flag
* and pushing buffers earlier for server-side queuing will be advantageous
* in such cases.
*
* \param transaction The transaction in which to make the change.
* \param surface_control The ASurfaceControl on which to control buffer backpressure behavior.
* \param enableBackPressure Whether to enable backpressure.
*
* Available since API level 31.
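For a client that would rather queue frames server-side than drop them, the flag is applied once per layer. A minimal sketch, assuming `transaction` and `surface_control` were created earlier with this package's ASurfaceTransaction/ASurfaceControl procedures:

// Hypothetical setup helper: opt a layer into server-side buffer queuing.
enable_server_side_queuing :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl) {
	ASurfaceTransaction_setEnableBackPressure(transaction, surface_control, true)
	// Each submitted buffer must now be presented before it is released back
	// to the app, so early submissions are queued rather than dropped.
}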
ASurfaceTransaction_setExtendedRangeBrightness ¶
ASurfaceTransaction_setExtendedRangeBrightness :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, currentBufferRatio: f32, desiredRatio: f32) ---
*
* Sets the desired extended range brightness for the layer. This only applies to layers whose
* dataspace has RANGE_EXTENDED set on it. See also ASurfaceTransaction_setDesiredHdrHeadroom;
* prefer this API for formats that encode an HDR/SDR ratio as part of generating the buffer.
*
* @param surface_control The layer whose extended range brightness is being specified
* @param currentBufferRatio The current HDR/SDR ratio of the current buffer, represented as
*        peakHdrBrightnessInNits / targetSdrWhitePointInNits. For example, if the
*        buffer was rendered with a target SDR whitepoint of 100 nits and a max
*        display brightness of 200 nits, this should be set to 2.0f.
*
*        The default value is 1.0f.
*
*        Transfer functions that encode their own brightness ranges, such as
*        HLG or PQ, should also set this to 1.0f and instead communicate
*        extended content brightness information via metadata such as CTA861_3
*        or SMPTE2086.
*
*        Must be finite && >= 1.0f
*
* @param desiredRatio The desired HDR/SDR ratio, represented as peakHdrBrightnessInNits /
*        targetSdrWhitePointInNits. This can be used to communicate the max desired
*        brightness range. This is similar to the "max luminance" value in other
*        HDR metadata formats, but represented as a ratio of the target SDR whitepoint
*        to the max display brightness. The system may not be able to, or may choose
*        not to, deliver the requested range.
*
*        While requesting a large desired ratio will result in the most
*        dynamic range, voluntarily reducing the requested range can help
*        improve battery life as well as improve quality by ensuring
*        greater bit depth is allocated to the luminance range in use.
*
*        The default value is 1.0f and indicates that extended range brightness
*        is not being used, so the resulting SDR or HDR behavior will be
*        determined entirely by the dataspace being used (i.e., typically SDR,
*        however PQ or HLG transfer functions will still result in HDR).
*
*        When called after ASurfaceTransaction_setDesiredHdrHeadroom, the
*        desiredRatio will override the desiredHeadroom provided by
*        ASurfaceTransaction_setDesiredHdrHeadroom. Conversely, when called before
*        ASurfaceTransaction_setDesiredHdrHeadroom, the desiredHeadroom provided by
*        ASurfaceTransaction_setDesiredHdrHeadroom will override the desiredRatio.
*
*        Must be finite && >= 1.0f
*
* Available since API level 34.
ASurfaceTransaction_setFrameRate ¶
ASurfaceTransaction_setFrameRate :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, frameRate: f32, compatibility: ANativeWindow_FrameRateCompatibility) ---
*
* Same as ASurfaceTransaction_setFrameRateWithChangeStrategy(transaction, surface_control,
* frameRate, compatibility, ANATIVEWINDOW_CHANGE_FRAME_RATE_ONLY_IF_SEAMLESS).
*
* See ASurfaceTransaction_setFrameRateWithChangeStrategy().
*
* Available since API level 30.
ASurfaceTransaction_setFrameRateWithChangeStrategy ¶
ASurfaceTransaction_setFrameRateWithChangeStrategy :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, frameRate: f32, compatibility: ANativeWindow_FrameRateCompatibility, changeFrameRateStrategy: ANativeWindow_ChangeFrameRateStrategy) ---
*
* Sets the intended frame rate for \a surface_control.
*
* On devices that are capable of running the display at different refresh rates, the system may
* choose a display refresh rate to better match this surface's frame rate. Usage of this API won't
* directly affect the application's frame production pipeline. However, because the system may
* change the display refresh rate, calls to this function may result in changes to Choreographer
* callback timings, and changes to the time interval at which the system releases buffers back to
* the application.
*
* You can register for changes in the refresh rate using
* \a AChoreographer_registerRefreshRateCallback.
*
* See ASurfaceTransaction_clearFrameRate().
*
* \param frameRate is the intended frame rate of this surface, in frames per second. 0 is a special
* value that indicates the app will accept the system's choice for the display frame rate, which is
* the default behavior if this function isn't called. The frameRate param does <em>not</em> need to
* be a valid refresh rate for this device's display - e.g., it's fine to pass 30fps to a device
* that can only run the display at 60fps.
*
* \param compatibility The frame rate compatibility of this surface. The compatibility value may
* influence the system's choice of display frame rate. To specify a compatibility use the
* ANATIVEWINDOW_FRAME_RATE_COMPATIBILITY_* enum. This parameter is ignored when frameRate is 0.
*
* \param changeFrameRateStrategy Whether display refresh rate transitions caused by this
* surface should be seamless. A seamless transition is one that doesn't have any visual
* interruptions, such as a black screen for a second or two. See the
* ANATIVEWINDOW_CHANGE_FRAME_RATE_* values. This parameter is ignored when frameRate is 0.
*
* Available since API level 31.
ASurfaceTransaction_setFrameTimeline ¶
ASurfaceTransaction_setFrameTimeline :: proc "c" (transaction: ^ASurfaceTransaction, vsyncId: i64) ---
*
* Sets the frame timeline to use in SurfaceFlinger.
*
* A frame timeline should be chosen based on the frame deadline the application
* can meet when rendering the frame and the application's desired presentation time.
* By setting a frame timeline, SurfaceFlinger tries to present the frame at the corresponding
* expected presentation time.
*
* To receive frame timelines, a callback must be posted to Choreographer using
* AChoreographer_postVsyncCallback(). The \c vsyncId can then be extracted from the
* callback payload using AChoreographerFrameCallbackData_getFrameTimelineVsyncId().
*
* \param vsyncId The vsync ID received from AChoreographer, setting the frame's presentation target
* to the corresponding expected presentation time and deadline from the frame to be rendered. A
* stale or invalid value will be ignored.
*
* Available since API level 33.
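A sketch of the flow described above, wiring a Choreographer vsync callback into a pending transaction; the parameter types of the Choreographer procedures are assumed from their NDK declarations:

// Hypothetical vsync callback: take the platform-preferred frame timeline
// (index 0) and route its vsync ID into the transaction passed via `data`.
on_vsync :: proc "c" (callbackData: ^AChoreographerFrameCallbackData, data: rawptr) {
	vsync_id := AChoreographerFrameCallbackData_getFrameTimelineVsyncId(callbackData, 0)
	transaction := (^ASurfaceTransaction)(data)
	ASurfaceTransaction_setFrameTimeline(transaction, i64(vsync_id))
}
// Registered elsewhere with, e.g.:
// AChoreographer_postVsyncCallback(choreographer, on_vsync, transaction)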
ASurfaceTransaction_setGeometry ¶
ASurfaceTransaction_setGeometry :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, source: ^ARect, destination: ^ARect, transform: ANativeWindowTransform) ---
*
* \param source The sub-rect within the buffer's content to be rendered inside the surface's area.
* The surface's source rect is clipped by the bounds of its current buffer. The source rect's width
* and height must be > 0.
*
* \param destination Specifies the rect in the parent's space where this surface will be drawn. The
* post-source rect bounds are scaled to fit the destination rect. The surface's destination rect is
* clipped by the bounds of its parent. The destination rect's width and height must be > 0.
*
* \param transform The transform applied after the source rect is applied to the buffer. This
* parameter should be set to 0 for no transform. To specify a transform use the
* NATIVE_WINDOW_TRANSFORM_* enum.
*
* Available since API level 29.
*
* @deprecated Use setCrop, setPosition, setBufferTransform, and setScale instead. Those functions
* provide well-defined behavior and allow for more control by the apps. They also allow the caller
* to set different properties at different times, instead of having to specify all the desired
* properties at once.
ASurfaceTransaction_setHdrMetadata_cta861_3 ¶
ASurfaceTransaction_setHdrMetadata_cta861_3 :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, metadata: ^AHdrMetadata_cta861_3) ---
*
* Sets the CTA 861.3 "HDR Static Metadata Extension" static metadata on a surface.
*
* When \a metadata is set to null, the framework does not use any cta861.3 metadata when rendering
* the surface's buffer.
*
* Available since API level 29.
ASurfaceTransaction_setHdrMetadata_smpte2086 ¶
ASurfaceTransaction_setHdrMetadata_smpte2086 :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, metadata: ^AHdrMetadata_smpte2086) ---
*
* Sets the SMPTE ST 2086 "Mastering Display Color Volume" static metadata on a surface.
*
* When \a metadata is set to null, the framework does not use any smpte2086 metadata when rendering
* the surface's buffer.
*
* Available since API level 29.
ASurfaceTransaction_setOnCommit ¶
ASurfaceTransaction_setOnCommit :: proc "c" (transaction: ^ASurfaceTransaction, _context: rawptr, func: ASurfaceTransaction_OnCommit) ---
*
* Sets the callback that will be invoked when the updates from this transaction are applied and are
* ready to be presented. This callback will be invoked before the ASurfaceTransaction_OnComplete
* callback.
*
* Available since API level 31.
ASurfaceTransaction_setOnComplete ¶
ASurfaceTransaction_setOnComplete :: proc "c" (transaction: ^ASurfaceTransaction, _context: rawptr, func: ASurfaceTransaction_OnComplete) ---
*
* Sets the callback that will be invoked when the updates from this transaction
* are presented. For details on the callback semantics and data, see the
* comments on the ASurfaceTransaction_OnComplete declaration above.
*
* Available since API level 29.
ASurfaceTransaction_setPosition ¶
ASurfaceTransaction_setPosition :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, x: i32, y: i32) ---
*
* Specifies the position in the parent's space where the surface will be drawn.
*
* \param x The x position to render the surface.
* \param y The y position to render the surface.
*
* Available since API level 31.
ASurfaceTransaction_setScale ¶
ASurfaceTransaction_setScale :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, xScale: f32, yScale: f32) ---
*
* Sets an x and y scale of a surface with (0, 0) as the centerpoint of the scale.
*
* \param xScale The scale in the x direction. Must be greater than 0.
* \param yScale The scale in the y direction. Must be greater than 0.
*
* Available since API level 31.
ASurfaceTransaction_setVisibility ¶
ASurfaceTransaction_setVisibility :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, visibility: SurfaceTransactionVisibility) ---
*
* Updates the visibility of \a surface_control. If \a visibility is set to
* ASURFACE_TRANSACTION_VISIBILITY_HIDE, the \a surface_control and all surfaces in its subtree will
* be hidden.
*
* Available since API level 29.
ASurfaceTransaction_setZOrder ¶
ASurfaceTransaction_setZOrder :: proc "c" (transaction: ^ASurfaceTransaction, surface_control: ^ASurfaceControl, z_order: i32) ---
*
* Updates the z order index for \a surface_control. Note that the z order for a surface
* is relative to other surfaces which are siblings of this surface. The behavior of siblings with
* the same z order is undefined.
*
* Z orders may be from MIN_INT32 to MAX_INT32. A layer's default z order index is 0.
*
* Available since API level 29.
ASystemFontIterator_close ¶
ASystemFontIterator_close :: proc "c" (iterator: ^ASystemFontIterator) ---
*
* Close an opened system font iterator, freeing any related resources.
*
* Available since API level 29.
*
* \param iterator a pointer to an iterator for the system fonts. Does nothing if NULL is passed.
ASystemFontIterator_next ¶
ASystemFontIterator_next :: proc "c" (iterator: ^ASystemFontIterator) -> ^AFont ---
*
* Move to the next system font.
*
* Available since API level 29.
*
* \param iterator an iterator for the system fonts. Passing NULL is not allowed.
* \return a font. If no more fonts are available, returns nullptr. You need to release the returned
* font with ASystemFont_close when it is no longer needed.
ASystemFontIterator_open ¶
ASystemFontIterator_open :: proc "c" () -> ^ASystemFontIterator ---
*
* Create a system font iterator.
*
* Use ASystemFontIterator_close() to close the iterator.
*
* Available since API level 29.
*
* \return a pointer to a newly allocated iterator, nullptr on failure.
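A sketch of the full iterator lifecycle, combining the three procedures above (ASystemFont_close is the release procedure named by the docs above):

// Enumerate every installed system font, releasing each font and the iterator.
list_system_fonts :: proc "c" () {
	iter := ASystemFontIterator_open()
	if iter == nil do return
	for font := ASystemFontIterator_next(iter); font != nil; font = ASystemFontIterator_next(iter) {
		// ... inspect the font via the AFont_* accessors here ...
		ASystemFont_close(font)
	}
	ASystemFontIterator_close(iter)
}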
AThermal_acquireManager ¶
AThermal_acquireManager :: proc "c" () -> ^AThermalManager ---
*
* Acquire an instance of the thermal manager. This must be freed using
* {@link AThermal_releaseManager}.
*
* Available since API level 30.
*
* @return manager instance on success, nullptr on failure.
AThermal_getCurrentThermalStatus ¶
AThermal_getCurrentThermalStatus :: proc "c" (manager: ^AThermalManager) -> AThermalStatus ---
*
* Gets the current thermal status.
*
* Available since API level 30.
*
* @param manager The manager instance to use to query the thermal status.
* Acquired via {@link AThermal_acquireManager}.
*
* @return current thermal status, ATHERMAL_STATUS_ERROR on failure.
AThermal_getThermalHeadroom ¶
AThermal_getThermalHeadroom :: proc "c" (manager: ^AThermalManager, forecastSeconds: i32) -> f32 ---
*
* Provides an estimate of how much thermal headroom the device currently has before
* hitting severe throttling.
*
* Note that this only attempts to track the headroom of slow-moving sensors, such as
* the skin temperature sensor. This means that there is no benefit to calling this function
* more frequently than about once per second, and attempting to call it significantly
* more frequently may result in the function returning {@code NaN}.
*
* In addition, in order to be able to provide an accurate forecast, the system does
* not attempt to forecast until it has multiple temperature samples from which to
* extrapolate. This should only take a few seconds from the time of the first call,
* but during this time, no forecasting will occur, and the current headroom will be
* returned regardless of the value of {@code forecastSeconds}.
*
* The value returned is a non-negative float that represents how much of the thermal envelope
* is in use (or is forecasted to be in use). A value of 1.0 indicates that the device is
* (or will be) throttled at {@link #ATHERMAL_STATUS_SEVERE}. Such throttling can affect the
* CPU, GPU, and other subsystems. Values may exceed 1.0, but there is no implied mapping
* to specific thermal levels beyond that point. This means that values greater than 1.0
* may correspond to {@link #ATHERMAL_STATUS_SEVERE}, but may also represent heavier throttling.
*
* A value of 0.0 corresponds to a fixed distance from 1.0, but does not correspond to any
* particular thermal status or temperature. Values on (0.0, 1.0] may be expected to scale
* linearly with temperature, though temperature changes over time are typically not linear.
* Negative values will be clamped to 0.0 before returning.
*
* Available since API level 31.
*
* @param manager The manager instance to use.
* Acquired via {@link AThermal_acquireManager}.
* @param forecastSeconds how many seconds into the future to forecast. Given that device
* conditions may change at any time, forecasts from further in the
* future will likely be less accurate than forecasts in the near future.
* @return a value greater than or equal to 0.0, where 1.0 indicates the SEVERE throttling threshold,
* as described above. Returns NaN if the device does not support this functionality or
* if this function is called significantly faster than once per second.
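A sketch of the polling pattern the notes above imply: query at most about once per second and treat NaN as "unsupported, or polled too fast":

import "core:math"

// Poll the ~10s thermal forecast and react at or beyond the SEVERE threshold.
check_thermal :: proc(manager: ^AThermalManager) {
	headroom := AThermal_getThermalHeadroom(manager, 10)
	if math.is_nan(headroom) {
		return // unsupported, or called significantly faster than once per second
	}
	if headroom >= 1.0 {
		// Forecasted to reach ATHERMAL_STATUS_SEVERE: shed load, lower quality, etc.
	}
}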
AThermal_getThermalHeadroomThresholds ¶
AThermal_getThermalHeadroomThresholds :: proc "c" (manager: ^AThermalManager, outThresholds: ^^AThermalHeadroomThreshold, size: ^uint) -> i32 ---
*
* Gets the thermal headroom thresholds for all available thermal status.
*
* A thermal status will only exist in output if the device manufacturer has the
* corresponding threshold defined for at least one of its slow-moving skin temperature
* sensors. If it's set, one should also expect to get it from
* {@link #AThermal_getCurrentThermalStatus} or {@link AThermal_StatusCallback}.
* <p>
* The headroom threshold is used to interpret the possible thermal throttling status based on
* the headroom prediction. For example, if the headroom threshold for
* {@link ATHERMAL_STATUS_LIGHT} is 0.7, and a headroom prediction in 10s returns 0.75
* (or `AThermal_getThermalHeadroom(10)=0.75`), one can expect that in 10 seconds the system
* could be in a lightly throttled state if the workload remains the same. The app can consider
* taking action according to the nearest throttling status and the difference between the headroom
* and the threshold.
* <p>
* For new devices it's guaranteed to have a single sensor, but for older devices with multiple
* sensors reporting different threshold values, the minimum threshold is taken to be conservative
* on predictions. Thus, when reading real-time headroom, it's not guaranteed that a real-time value
* of 0.75 (or `AThermal_getThermalHeadroom(0)`=0.75) exceeding the threshold of 0.7 above
* will always come with lightly throttled state
* (or `AThermal_getCurrentThermalStatus()=ATHERMAL_STATUS_LIGHT`) but it can be lower
* (or `AThermal_getCurrentThermalStatus()=ATHERMAL_STATUS_NONE`).
* It is always guaranteed that the device won't be throttled more heavily than the unmet
* threshold's state, so a real-time headroom of 0.75 will never come with
* {@link #ATHERMAL_STATUS_MODERATE} but always lower, and 0.65 will never come with
* {@link ATHERMAL_STATUS_LIGHT} but with {@link #ATHERMAL_STATUS_NONE}.
* <p>
* The returned list of thresholds is cached on first successful query and owned by the thermal
* manager, which will not change between calls to this function. The caller should only need to
* free the manager with {@link AThermal_releaseManager}.
*
* Available since API level 35.
*
* @param manager The manager instance to use.
* Acquired via {@link AThermal_acquireManager}.
* @param outThresholds non-null output pointer to a null AThermalHeadroomThreshold pointer, which
* will be set to the cached array of thresholds if thermal thresholds are supported
* by the system or device, otherwise nullptr or unmodified.
* @param size non-null output pointer whose value will be set to the size of the threshold array
* or 0 if it's not supported.
* @return 0 on success
* EINVAL if outThresholds or size is nullptr, or *outThresholds is not nullptr.
* EPIPE if communication with the system service has failed.
* ENOSYS if the feature is disabled by the current system.
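A sketch of a conforming call, reusing a `manager` acquired via AThermal_acquireManager and assuming the NDK's AThermalHeadroomThreshold field names `headroom` and `thermalStatus`:

import "core:fmt"

// Dump the cached headroom thresholds; the thermal manager owns the array.
log_thresholds :: proc(manager: ^AThermalManager) {
	thresholds: ^AThermalHeadroomThreshold // must start out nil, per the contract above
	count: uint
	if AThermal_getThermalHeadroomThresholds(manager, &thresholds, &count) != 0 || thresholds == nil {
		return // unsupported, disabled, or the service call failed
	}
	for i in 0 ..< count {
		t := ([^]AThermalHeadroomThreshold)(thresholds)[i]
		fmt.printfln("status %v begins near headroom %v", t.thermalStatus, t.headroom)
	}
	// Do not free `thresholds`; only release the manager via AThermal_releaseManager.
}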
AThermal_registerThermalStatusListener ¶
AThermal_registerThermalStatusListener :: proc "c" (manager: ^AThermalManager, callback: AThermal_StatusCallback, data: rawptr) -> i32 ---
*
* Register the thermal status listener for thermal status change.
*
* Available since API level 30.
*
* @param manager The manager instance to use to register.
* Acquired via {@link AThermal_acquireManager}.
* @param callback The callback function to be called when the thermal status is updated.
* @param data The data pointer to be passed when callback is called.
*
* @return 0 on success
* EINVAL if the listener and data pointer were previously added and not removed.
* EPERM if the required permission is not held.
* EPIPE if communication with the system service has failed.
AThermal_releaseManager ¶
AThermal_releaseManager :: proc "c" (manager: ^AThermalManager) ---
*
* Release the thermal manager pointer acquired via
* {@link AThermal_acquireManager}.
*
* Available since API level 30.
*
* @param manager The manager to be released.
AThermal_unregisterThermalStatusListener ¶
AThermal_unregisterThermalStatusListener :: proc "c" (manager: ^AThermalManager, callback: AThermal_StatusCallback, data: rawptr) -> i32 ---
*
* Unregister the thermal status listener previously registered.
*
* Available since API level 30.
*
* @param manager The manager instance to use to unregister.
* Acquired via {@link AThermal_acquireManager}.
* @param callback The callback function to be called when the thermal status is updated.
* @param data The data pointer to be passed when callback is called.
*
* @return 0 on success
* EINVAL if the listener and data pointer were not previously added.
* EPERM if the required permission is not held.
* EPIPE if communication with the system service has failed.
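The register/unregister pair must use the same callback and data pointer. A sketch, with the callback's parameter order taken from the NDK's AThermal_StatusCallback:

// React to throttling changes pushed by the system instead of polling.
on_thermal_status :: proc "c" (data: rawptr, status: AThermalStatus) {
	// e.g. lower render resolution once `status` reaches a throttled level.
}

// Pairing (manager acquired via AThermal_acquireManager):
// AThermal_registerThermalStatusListener(manager, on_thermal_status, nil)
// ...
// AThermal_unregisterThermalStatusListener(manager, on_thermal_status, nil)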
ATrace_beginAsyncSection ¶
ATrace_beginAsyncSection :: proc "c" (sectionName: cstring, cookie: i32) ---
*
* Writes a trace message to indicate that a given section of code has
* begun. Must be followed by a call to {@link ATrace_endAsyncSection} with the same
* methodName and cookie. Unlike {@link ATrace_beginSection} and {@link ATrace_endSection},
* asynchronous events do not need to be nested. The name and cookie used to
* begin an event must be used to end it.
*
* Available since API level 29.
*
* \param sectionName The method name to appear in the trace.
* \param cookie Unique identifier for distinguishing simultaneous events
ATrace_beginSection ¶
ATrace_beginSection :: proc "c" (sectionName: cstring) ---
*
* Writes a tracing message to indicate that the given section of code has begun. This call must be
* followed by a corresponding call to {@link ATrace_endSection} on the same thread.
*
* Note: At this time the vertical bar character '|' and newline character '\\n' are used internally
* by the tracing mechanism. If \p sectionName contains these characters they will be replaced with a
* space character in the trace.
*
* Available since API level 23.
ATrace_endAsyncSection ¶
ATrace_endAsyncSection :: proc "c" (sectionName: cstring, cookie: i32) ---
*
* Writes a trace message to indicate that the current method has ended.
* Must be called exactly once for each call to {@link ATrace_beginAsyncSection}
* using the same name and cookie.
*
* Available since API level 29.
*
* \param sectionName The method name to appear in the trace.
* \param cookie Unique identifier for distinguishing simultaneous events
ATrace_endSection ¶
ATrace_endSection :: proc "c" () ---
*
* Writes a tracing message to indicate that a given section of code has ended. This call must be
* preceded by a corresponding call to {@link ATrace_beginSection} on the same thread. Calling this method
* will mark the end of the most recently begun section of code, so care must be taken to ensure
* that {@link ATrace_beginSection}/{@link ATrace_endSection} pairs are properly nested and called from the same thread.
*
* Available since API level 23.
ATrace_isEnabled ¶
ATrace_isEnabled :: proc "c" () -> bool ---
*
* Returns true if tracing is enabled. Use this to avoid expensive computation that is only
* necessary when tracing is enabled.
*
* Available since API level 23.
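A sketch of the guard pattern this enables, paired with the begin/end procedures in this section:

// Skip building trace-only data when no trace is being recorded.
trace_debug_stats :: proc "c" () {
	if ATrace_isEnabled() {
		ATrace_beginSection("compute_debug_stats")
		// ... expensive computation whose only consumer is the trace ...
		ATrace_endSection() // must pair with the begin on the same thread
	}
}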
ATrace_setCounter ¶
ATrace_setCounter :: proc "c" (counterName: cstring, counterValue: i64) ---
*
* Writes a trace message to indicate the value of a given counter.
*
* Available since API level 29.
*
* \param counterName The counter name to appear in the trace.
* \param counterValue The counter value.
AWorkDuration_create ¶
AWorkDuration_create :: proc "c" () -> ^AWorkDuration ---
*
* Creates a new AWorkDuration. When the client finishes using {@link AWorkDuration}, it should
* call {@link AWorkDuration_release()} to destroy {@link AWorkDuration} and release all resources
* associated with it.
*
* Available since API level 35.
*
* @return AWorkDuration on success and nullptr otherwise.
AWorkDuration_release ¶
AWorkDuration_release :: proc "c" (aWorkDuration: ^AWorkDuration) ---
*
* Destroys {@link AWorkDuration} and frees all resources associated with it.
*
* Available since API level 35.
*
* @param aWorkDuration The {@link AWorkDuration} created by calling {@link AWorkDuration_create()}
AWorkDuration_setActualCpuDurationNanos ¶
AWorkDuration_setActualCpuDurationNanos :: proc "c" (aWorkDuration: ^AWorkDuration, actualCpuDurationNanos: i64) ---
*
* Sets the actual CPU work duration in nanoseconds.
*
* Available since API level 35.
*
* @param aWorkDuration The {@link AWorkDuration} created by calling {@link AWorkDuration_create()}
* @param actualCpuDurationNanos The actual CPU work duration in nanoseconds. This number must be
* greater than or equal to zero. If it is equal to zero, that means the CPU was not
* measured.
AWorkDuration_setActualGpuDurationNanos ¶
AWorkDuration_setActualGpuDurationNanos :: proc "c" (aWorkDuration: ^AWorkDuration, actualGpuDurationNanos: i64) ---
*
* Sets the actual GPU work duration in nanoseconds.
*
* Available since API level 35.
*
* @param aWorkDuration The {@link AWorkDuration} created by calling {@link AWorkDuration_create()}.
* @param actualGpuDurationNanos The actual GPU work duration in nanoseconds. This number must be
* greater than or equal to zero. If it is equal to zero, that means the GPU was not
* measured.
AWorkDuration_setActualTotalDurationNanos ¶
AWorkDuration_setActualTotalDurationNanos :: proc "c" (aWorkDuration: ^AWorkDuration, actualTotalDurationNanos: i64) ---
*
* Sets the actual total work duration in nanoseconds.
*
* Available since API level 35.
*
* @param aWorkDuration The {@link AWorkDuration} created by calling {@link AWorkDuration_create()}
* @param actualTotalDurationNanos The actual total work duration in nanoseconds. This number must
* be greater than zero.
AWorkDuration_setWorkPeriodStartTimestampNanos ¶
AWorkDuration_setWorkPeriodStartTimestampNanos :: proc "c" (aWorkDuration: ^AWorkDuration, workPeriodStartTimestampNanos: i64) ---
*
* Sets the work period start timestamp in nanoseconds.
*
* Available since API level 35.
*
* @param aWorkDuration The {@link AWorkDuration} created by calling {@link AWorkDuration_create()}
* @param workPeriodStartTimestampNanos The work period start timestamp in nanoseconds based on
* CLOCK_MONOTONIC, indicating when the work period starts. This timestamp must be greater than zero.
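Taken together, the AWorkDuration setters describe one report of a finished work period. A sketch, with the nanosecond arguments standing in for the app's own CLOCK_MONOTONIC measurements:

// Build a single work-duration report and release it afterwards.
report_work :: proc(start_ns, cpu_ns, gpu_ns, total_ns: i64) {
	wd := AWorkDuration_create()
	if wd == nil do return
	AWorkDuration_setWorkPeriodStartTimestampNanos(wd, start_ns) // must be > 0
	AWorkDuration_setActualCpuDurationNanos(wd, cpu_ns)          // >= 0; 0 means "not measured"
	AWorkDuration_setActualGpuDurationNanos(wd, gpu_ns)          // >= 0; 0 means "not measured"
	AWorkDuration_setActualTotalDurationNanos(wd, total_ns)      // must be > 0
	// Typically handed to a performance-hint session before release (not shown here).
	AWorkDuration_release(wd)
}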
AndroidBitmap_compress ¶
AndroidBitmap_compress :: proc "c" (info: ^BitmapInfo, dataspace: ADataSpace, pixels: rawptr, format: AndroidBitmapCompressFormat, quality: i32, userContext: rawptr, fn: AndroidBitmap_CompressWriteFunc) -> ABitmapResult ---
*
* Compress |pixels| as described by |info|.
*
* Available since API level 30.
*
* @param info Description of the pixels to compress.
* @param dataspace {@link ADataSpace} describing the color space of the
* pixels.
* @param pixels Pointer to pixels to compress.
* @param format {@link AndroidBitmapCompressFormat} to compress to.
* @param quality Hint to the compressor, 0-100. The value is interpreted
* differently depending on the
* {@link AndroidBitmapCompressFormat}.
* @param userContext User-defined data which will be passed to the supplied
* {@link AndroidBitmap_CompressWriteFunc} each time it is
* called. May be null.
* @param fn Function that writes the compressed data. Will be called each time
* the compressor has compressed more data that is ready to be
* written. May be called more than once for each call to this method.
* May not be null.
* @return AndroidBitmap functions result code.
AndroidBitmap_getDataSpace ¶
AndroidBitmap_getDataSpace :: proc "c" (env: ^^JNINativeInterface, jbitmap: jobject) -> ADataSpace ---
*
* Given a java bitmap object, return its {@link ADataSpace}.
*
* Note that {@link ADataSpace} only exposes a few values. This may return
* {@link ADATASPACE_UNKNOWN}, even for Named ColorSpaces, if they have no
* corresponding ADataSpace.
*
* Available since API level 30.
AndroidBitmap_getHardwareBuffer ¶
AndroidBitmap_getHardwareBuffer :: proc "c" (env: ^^JNINativeInterface, bitmap: jobject, outBuffer: ^^AHardwareBuffer) -> ABitmapResult ---
*
* Retrieve the native object associated with a HARDWARE Bitmap.
*
* Client must not modify it while a Bitmap is wrapping it.
*
* Available since API level 30.
*
* @param env Handle to the JNI environment pointer.
* @param bitmap Handle to an android.graphics.Bitmap.
* @param outBuffer On success, is set to a pointer to the
* {@link AHardwareBuffer} associated with bitmap. This acquires
* a reference on the buffer, and the client must call
* {@link AHardwareBuffer_release} when finished with it.
* @return AndroidBitmap functions result code.
* {@link ANDROID_BITMAP_RESULT_BAD_PARAMETER} if bitmap is not a
* HARDWARE Bitmap.
AndroidBitmap_getInfo ¶
AndroidBitmap_getInfo :: proc "c" (env: ^^JNINativeInterface, jbitmap: jobject, info: ^BitmapInfo) -> ABitmapResult ---
*
* Given a java bitmap object, fill out the {@link BitmapInfo} struct for it.
* If the call fails, the info parameter will be ignored.
AndroidBitmap_lockPixels ¶
AndroidBitmap_lockPixels :: proc "c" (env: ^^JNINativeInterface, jbitmap: jobject, addrPtr: ^rawptr) -> ABitmapResult ---
*
* Given a java bitmap object, attempt to lock the pixel address.
* Locking will ensure that the memory for the pixels will not move
* until the unlockPixels call, and ensure that, if the pixels had been
* previously purged, they will have been restored.
*
* If this call succeeds, it must be balanced by a call to
* AndroidBitmap_unlockPixels, after which time the address of the pixels should
* no longer be used.
*
* If this succeeds, *addrPtr will be set to the pixel address. If the call
* fails, addrPtr will be ignored.
AndroidBitmap_unlockPixels ¶
AndroidBitmap_unlockPixels :: proc "c" (env: ^^JNINativeInterface, jbitmap: jobject) -> ABitmapResult ---
*
* Call this to balance a successful call to AndroidBitmap_lockPixels.
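A sketch of the lock/unlock pairing, assuming `env` and `jbitmap` come from the surrounding JNI call and that the zero ABitmapResult value denotes success, as in the NDK's ANDROID_BITMAP_RESULT_SUCCESS:

// Lock, mutate, and unlock a bitmap's pixels; the address is dead after unlock.
fill_pixels :: proc "c" (env: ^^JNINativeInterface, jbitmap: jobject) {
	pixels: rawptr
	if AndroidBitmap_lockPixels(env, jbitmap, &pixels) == ABitmapResult(0) { // 0 == success (assumed)
		// Write to `pixels` here, using the stride/format from AndroidBitmap_getInfo.
		AndroidBitmap_unlockPixels(env, jbitmap)
	}
}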
android_dlopen_ext ¶
android_dlopen_ext :: proc "c" (__filename: cstring, __flags: i32, __info: ^android_dlextinfo) -> rawptr ---
*
* Opens the given library. The `__filename` and `__flags` arguments are
* the same as for [dlopen(3)](http://man7.org/linux/man-pages/man3/dlopen.3.html),
* with the Android-specific flags supplied via the `flags` member of `__info`.
*
* Available since API level 21.
android_fdsan_close_with_tag ¶
android_fdsan_close_with_tag :: proc "c" (fd: i32, tag: u64) -> i32 ---
*
* Close a file descriptor with a tag, and reset the tag to 0.
*
* Logs and aborts if the tag is incorrect.
*
* Available since API level 29.
android_fdsan_create_owner_tag ¶
android_fdsan_create_owner_tag :: proc "c" (type: android_fdsan_owner_type, tag: u64) -> u64 ---
*
* Create an owner tag with the specified type and least significant 56 bits of tag.
*
* Available since API level 29.
android_fdsan_exchange_owner_tag ¶
android_fdsan_exchange_owner_tag :: proc "c" (fd: i32, expected_tag: u64, new_tag: u64) ---
*
* Exchange a file descriptor's tag.
*
* Logs and aborts if the fd's tag does not match expected_tag.
*
* Available since API level 29.
android_fdsan_get_error_level ¶
android_fdsan_get_error_level :: proc "c" () -> android_fdsan_error_level ---
*
* Get the error level.
*
* Available since API level 29.
android_fdsan_get_owner_tag ¶
android_fdsan_get_owner_tag :: proc "c" (fd: i32) -> u64 ---
*
* Get a file descriptor's current owner tag.
*
* Returns 0 for untagged and invalid file descriptors.
*
* Available since API level 29.
android_fdsan_get_tag_type ¶
android_fdsan_get_tag_type :: proc "c" (tag: u64) -> cstring ---
*
* Get an owner tag's string representation.
*
* The return value points to memory with static lifetime; do not attempt to modify it.
*
* Available since API level 29.
android_fdsan_get_tag_value ¶
android_fdsan_get_tag_value :: proc "c" (tag: u64) -> u64 ---
*
* Get an owner tag's value, with the type masked off.
*
* Available since API level 29.
android_fdsan_set_error_level ¶
android_fdsan_set_error_level :: proc "c" (new_level: android_fdsan_error_level) -> android_fdsan_error_level ---
*
* Set the error level and return the previous state.
*
* Error checking is automatically disabled in the child of a fork, to maintain
* compatibility with code that forks, closes all file descriptors, and then
* execs.
*
* In cases such as the zygote, where the child has no intention of calling
* exec, call this function to reenable fdsan checks.
*
* This function is not thread-safe and does not synchronize with checks of the
* value, and so should probably only be called in single-threaded contexts
* (e.g. postfork).
*
* Available since API level 29.
android_fdsan_set_error_level_from_property ¶
android_fdsan_set_error_level_from_property :: proc "c" (default_level: android_fdsan_error_level) -> android_fdsan_error_level ---
*
* Set the error level to the global setting if available, or a default value.
*
* Available since API level 30.
android_getaddrinfofornetwork ¶
android_getaddrinfofornetwork :: proc "c" (network: net_handle_t, node: cstring, service: cstring, hints: ^addrinfo, res: ^^addrinfo) -> i32 ---
*
* Perform hostname resolution via the DNS servers associated with |network|.
*
* All arguments (apart from |network|) are used identically as those passed
* to getaddrinfo(3). Return and error values are identical to those of
* getaddrinfo(3), and in particular gai_strerror(3) can be used as expected.
* Similar to getaddrinfo(3):
* - |hints| may be NULL (in which case man page documented defaults apply)
* - either |node| or |service| may be NULL, but not both
* - |res| must not be NULL
*
* This is the equivalent of: [android.net.Network#getAllByName()](https://developer.android.com/reference/android/net/Network.html#getAllByName(java.lang.String))
*
* Available since API level 23.
android_getprocdns ¶
android_getprocdns :: proc "c" (network: ^net_handle_t) -> i32 ---
*
* Gets the |network| to which domain name resolutions are bound on the
* current process.
*
* Returns 0 on success, or -1 setting errno to EINVAL if a null pointer is
* passed in.
*
* Available since API level 31.
android_getprocnetwork ¶
android_getprocnetwork :: proc "c" (network: ^net_handle_t) -> i32 ---
*
* Gets the |network| bound to the current process, as per android_setprocnetwork.
*
* This is the equivalent of: [android.net.ConnectivityManager#getBoundNetworkForProcess()](https://developer.android.com/reference/android/net/ConnectivityManager.html#getBoundNetworkForProcess(android.net.Network))
*
* Returns 0 on success, or -1 setting errno to EINVAL if a null pointer is
* passed in.
*
* Available since API level 31.
android_res_cancel ¶
android_res_cancel :: proc "c" (nsend_fd: i32) ---
*
* Attempts to cancel the in-progress query associated with the |nsend_fd|
* descriptor.
*
* Available since API level 29.
android_res_nquery ¶
android_res_nquery :: proc "c" (network: net_handle_t, dname: cstring, ns_class: i32, ns_type: i32, flags: bit_set[ResNsendFlagsBits; u32]) -> i32 ---
*
* Look up the {|ns_class|, |ns_type|} Resource Record (RR) associated
* with Domain Name |dname| on the given |network|.
* The typical value for |ns_class| is ns_c_in, while |ns_type| can be any
* record type (for instance, ns_t_aaaa or ns_t_txt).
* |flags| is an additional configuration to control the actual querying behavior; see
* ResNsendFlags for detail.
*
* Returns a file descriptor to watch for read events, or a negative
* POSIX error code (see errno.h) if an immediate error occurs.
*
* Available since API level 29.
android_res_nresult ¶
android_res_nresult :: proc "c" (fd: i32, rcode: ^i32, answer: [^]u8, anslen: uint) -> i32 ---
*
* Read a result for the query associated with the |fd| descriptor.
* Closes |fd| before returning.
*
* Available since API level 29.
*
* Returns:
*     < 0: negative POSIX error code (see errno.h for possible values). |rcode| is not set.
*    >= 0: length of |answer|. |rcode| is the resolver return code (e.g., ns_r_nxdomain).
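A sketch of the query/result pairing, using the reconstructed android_res_nresult signature above; the numeric ns_c_in/ns_t_aaaa constants come from <arpa/nameser.h> and are assumptions here:

// Issue an asynchronous AAAA lookup on `network`, then collect the answer.
query_aaaa :: proc(network: net_handle_t) {
	fd := android_res_nquery(network, "example.org", 1, 28, {}) // 1 = ns_c_in, 28 = ns_t_aaaa
	if fd < 0 do return
	// ... wait until `fd` is readable (poll/epoll/ALooper) ...
	rcode: i32
	answer: [512]u8
	n := android_res_nresult(fd, &rcode, raw_data(answer[:]), uint(len(answer)))
	if n >= 0 {
		// answer[:n] holds the DNS response; rcode is the resolver return code.
	}
}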
android_res_nsend ¶
android_res_nsend :: proc "c" (network: net_handle_t, msg: [^]u8, msglen: uint, flags: bit_set[ResNsendFlagsBits; u32]) -> i32 ---
*
* Issue the query |msg| on the given |network|.
* |flags| is an additional configuration to control the actual querying behavior; see
* ResNsendFlags for detail.
*
* Returns a file descriptor to watch for read events, or a negative
* POSIX error code (see errno.h) if an immediate error occurs.
*
* Available since API level 29.
android_set_abort_message ¶
android_set_abort_message :: proc "c" (__msg: cstring) ---
*
* Sets the message included in the crash dump (tombstone) if the process later aborts.
*
* Available since API level 21.
android_setprocdns ¶
android_setprocdns :: proc "c" (network: net_handle_t) -> i32 ---
*
* Binds domain name resolutions performed by this process to |network|.
* android_setprocnetwork takes precedence over this setting.
*
* To clear a previous process binding, invoke with NETWORK_UNSPECIFIED.
* On success 0 is returned. On error -1 is returned, and errno is set.
*
* Available since API level 31.
android_setprocnetwork ¶
android_setprocnetwork :: proc "c" (network: net_handle_t) -> i32 ---
*
* Binds the current process to |network|. All sockets created in the future
* (and not explicitly bound via android_setsocknetwork()) will be bound to
* |network|. All host name resolutions will be limited to |network| as well.
* Note that if the network identified by |network| ever disconnects, all
* sockets created in this way will cease to work and all host name
* resolutions will fail. This is by design so an application doesn't
* accidentally use sockets it thinks are still bound to a particular network.
*
* To clear a previous process binding, invoke with NETWORK_UNSPECIFIED.
*
* This is the equivalent of: [android.net.ConnectivityManager#bindProcessToNetwork()](https://developer.android.com/reference/android/net/ConnectivityManager.html#bindProcessToNetwork(android.net.Network))
*
* Available since API level 23.
android_setsocknetwork ¶
android_setsocknetwork :: proc "c" (network: net_handle_t, fd: i32) -> i32 ---
*
* Set the network to be used by the given socket file descriptor.
*
* To clear a previous socket binding, invoke with NETWORK_UNSPECIFIED.
*
* This is the equivalent of: [android.net.Network#bindSocket()](https://developer.android.com/reference/android/net/Network.html#bindSocket(java.net.Socket))
*
* Available since API level 23.
android_tag_socket ¶
android_tag_socket :: proc "c" (sockfd: i32, tag: i32) -> i32 ---
*
* Set the socket tag for traffic statistics on the specified socket.
*
* This function tags the socket with the caller's UID (accepting blame for
* future traffic performed on this socket) even if the socket was originally
* opened by another UID or was previously tagged by another UID. Subsequent
* calls always replace any existing parameters. The socket tag is kept when the
* socket is sent to another process using binder IPCs or other mechanisms such
* as UNIX socket fd passing. The tag is a value defined by the caller and used
* together with uid for data traffic accounting, so that the function callers
* can account different types of data usage for a uid.
*
* Returns 0 on success, or a negative POSIX error code (see errno.h) on
* failure.
*
* Some possible error codes:
* -EBADF           Bad socketfd.
* -EPERM           No permission.
* -EAFNOSUPPORT    Socket family is neither AF_INET nor AF_INET6.
* -EPROTONOSUPPORT Socket protocol is neither IPPROTO_UDP nor IPPROTO_TCP.
* -EMFILE          Too many stats entries.
* There are still other error codes that may be provided by -errno of
* [getsockopt()](https://man7.org/linux/man-pages/man2/getsockopt.2.html) or by
* BPF maps read/write sys calls, which are set appropriately.
*
* Available since API level 33.
android_tag_socket_with_uid ¶
android_tag_socket_with_uid :: proc "c" (sockfd: i32, tag: i32, uid: u32) -> i32 ---
*
* Set the socket tag and owning UID for traffic statistics on the specified
* socket.
*
* Subsequent calls always replace any existing parameters. The socket tag and
* uid (if set) are kept when the socket is sent to another process using binder
* IPCs or other mechanisms such as UNIX socket fd passing. Any app can accept
* blame for future traffic performed on a socket originally created by another
* app by calling this method with its own UID (or calling
* android_tag_socket(int sockfd, int tag)). However, only apps holding the
* android.Manifest.permission#UPDATE_DEVICE_STATS permission may assign blame
* to another UID. If unset (default) the socket tag is 0, and the uid is the
* socket creator's uid.
*
* Returns 0 on success, or a negative POSIX error code (see errno.h) on
* failure.
*
* Available since API level 33.
android_untag_socket ¶
android_untag_socket :: proc "c" (sockfd: i32) -> i32 ---
*
* Untag a network socket.
*
* Future traffic on this socket will no longer be associated with any
* previously configured tag and uid. If the socket was created by another UID
* or was previously tagged by another UID, calling this function will clear the
* statistics parameters, and thus the UID blamed for traffic on the socket will
* be the UID that originally created the socket, even if the socket was
* subsequently tagged by a different UID.
*
* Returns 0 on success, or a negative POSIX error code (see errno.h) on
* failure.
*
* One possible error code:
* -EBADF Bad socketfd.
* Other error codes are either provided by -errno of
* [getsockopt()](https://man7.org/linux/man-pages/man2/getsockopt.2.html) or by
* BPF map element deletion sys call, which are set appropriately.
*
* Available since API level 33.
app_dummy ¶
app_dummy :: proc "c" () ---
asset_read_file ¶
asset_read_file :: proc(path: string, allocator := context.allocator) -> (data: []u8, err: AssetFileError = .None) {…}
get_android_app ¶
get_android_app :: proc "contextless" () -> ^android_app {…}
sync_file_info_free ¶
sync_file_info_free :: proc "c" (info: ^sync_file_info) ---
*
* Free a sync_file_info structure.
*
* Available since API level 26.
sync_get_fence_info ¶
sync_get_fence_info :: proc(info: ^sync_file_info) -> ^sync_fence_info {…}
*
* Get the array of fence infos from the sync file's info.
*
* The returned array is owned by the parent sync file info, and has
* info->num_fences entries.
*
* Available since API level 26.
sync_merge ¶
sync_merge :: proc "c" (name: cstring, fd1: i32, fd2: i32) -> i32 ---
*
* Merge two sync files.
*
* This produces a new sync file with the given name which has the union of the
* two original sync files' fences; redundant fences may be removed.
*
* If one of the input sync files is signaled or invalid, then this function
* may behave like dup(): the new file descriptor refers to the valid/unsignaled
* sync file with its original name, rather than a new sync file.
*
* The original fences remain valid, and the caller is responsible for closing
* them.
*
* Available since API level 26.
Procedure Groups
This section is empty.
Source Files
- NeuralNetworks.odin
- NeuralNetworksTypes.odin
- android_native_app_glue.odin
- asset_manager.odin
- asset_manager_jni.odin
- bitmap.odin
- choreographer.odin
- configuration.odin
- data_space.odin
- dlext.odin
- doc.odin
- extra.odin
- fdsan.odin
- file_descriptor_jni.odin
- font.odin
- font_matcher.odin
- hardware_buffer.odin
- hardware_buffer_jni.odin
- hdr_metadata.odin
- imagedecoder.odin
- input.odin
- jni.odin
- keycodes.odin
- log.odin
- looper.odin
- multinetwork.odin
- native_acitivty.odin
- native_window.odin
- native_window_jni.odin
- ndk-build.odin
- obb.odin
- performance_hint.odin
- permission_manager.odin
- rect.odin
- sensor.odin
- set_abort_message.odin
- sharedmem.odin
- sharedmem_jni.odin
- storage_manager.odin
- surface_control.odin
- surface_texture.odin
- surface_texture_jni.odin
- sync.odin
- system_fonts.odin
- thermal.odin
- trace.odin
- types.odin
- window.odin
Generation Information
Generated with odin version dev-v0.0.1 (vendor "odin") Linux_amd64 @ 2026-01-30 10:23:16.679828558 +0000 UTC